00:00:00.000 Started by upstream project "autotest-per-patch" build number 132801
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.106 The recommended git tool is: git
00:00:00.106 using credential 00000000-0000-0000-0000-000000000002
00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.148 Fetching changes from the remote Git repository
00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.192 Using shallow fetch with depth 1
00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.192 > git --version # timeout=10
00:00:00.222 > git --version # 'git version 2.39.2'
00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.243 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.243 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.910 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.921 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.933 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.933 > git config core.sparsecheckout # timeout=10
00:00:07.945 > git read-tree -mu HEAD # timeout=10
00:00:07.960 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.987 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.987 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.079 [Pipeline] Start of Pipeline
00:00:08.088 [Pipeline] library
00:00:08.089 Loading library shm_lib@master
00:00:08.089 Library shm_lib@master is cached. Copying from home.
00:00:08.105 [Pipeline] node
00:00:08.114 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.115 [Pipeline] {
00:00:08.122 [Pipeline] catchError
00:00:08.123 [Pipeline] {
00:00:08.130 [Pipeline] wrap
00:00:08.136 [Pipeline] {
00:00:08.141 [Pipeline] stage
00:00:08.142 [Pipeline] { (Prologue)
00:00:08.355 [Pipeline] sh
00:00:08.637 + logger -p user.info -t JENKINS-CI
00:00:08.655 [Pipeline] echo
00:00:08.657 Node: WFP3
00:00:08.665 [Pipeline] sh
00:00:08.967 [Pipeline] setCustomBuildProperty
00:00:08.980 [Pipeline] echo
00:00:08.982 Cleanup processes
00:00:08.987 [Pipeline] sh
00:00:09.272 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.272 1153026 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.283 [Pipeline] sh
00:00:09.567 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.567 ++ grep -v 'sudo pgrep'
00:00:09.567 ++ awk '{print $1}'
00:00:09.567 + sudo kill -9
00:00:09.567 + true
00:00:09.578 [Pipeline] cleanWs
00:00:09.586 [WS-CLEANUP] Deleting project workspace...
00:00:09.586 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.592 [WS-CLEANUP] done
00:00:09.595 [Pipeline] setCustomBuildProperty
00:00:09.607 [Pipeline] sh
00:00:09.890 + sudo git config --global --replace-all safe.directory '*'
00:00:09.993 [Pipeline] httpRequest
00:00:10.820 [Pipeline] echo
00:00:10.822 Sorcerer 10.211.164.112 is alive
00:00:10.833 [Pipeline] retry
00:00:10.835 [Pipeline] {
00:00:10.850 [Pipeline] httpRequest
00:00:10.855 HttpMethod: GET
00:00:10.855 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.856 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.872 Response Code: HTTP/1.1 200 OK
00:00:10.872 Success: Status code 200 is in the accepted range: 200,404
00:00:10.873 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.969 [Pipeline] }
00:00:17.986 [Pipeline] // retry
00:00:17.993 [Pipeline] sh
00:00:18.285 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:18.300 [Pipeline] httpRequest
00:00:18.899 [Pipeline] echo
00:00:18.901 Sorcerer 10.211.164.112 is alive
00:00:18.910 [Pipeline] retry
00:00:18.912 [Pipeline] {
00:00:18.925 [Pipeline] httpRequest
00:00:18.930 HttpMethod: GET
00:00:18.930 URL: http://10.211.164.112/packages/spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:00:18.931 Sending request to url: http://10.211.164.112/packages/spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:00:18.942 Response Code: HTTP/1.1 200 OK
00:00:18.942 Success: Status code 200 is in the accepted range: 200,404
00:00:18.942 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:01:00.135 [Pipeline] }
00:01:00.154 [Pipeline] // retry
00:01:00.162 [Pipeline] sh
00:01:00.450 + tar --no-same-owner -xf spdk_3318278a6b7e81edb06174f0a9d84218a31af88f.tar.gz
00:01:03.003 [Pipeline] sh
00:01:03.291 + git -C spdk log --oneline -n5
00:01:03.291 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:01:03.291 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:03.291 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:03.291 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:03.291 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:03.303 [Pipeline] }
00:01:03.317 [Pipeline] // stage
00:01:03.326 [Pipeline] stage
00:01:03.328 [Pipeline] { (Prepare)
00:01:03.345 [Pipeline] writeFile
00:01:03.361 [Pipeline] sh
00:01:03.649 + logger -p user.info -t JENKINS-CI
00:01:03.660 [Pipeline] sh
00:01:03.941 + logger -p user.info -t JENKINS-CI
00:01:03.953 [Pipeline] sh
00:01:04.238 + cat autorun-spdk.conf
00:01:04.238 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.238 SPDK_TEST_NVMF=1
00:01:04.238 SPDK_TEST_NVME_CLI=1
00:01:04.238 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.238 SPDK_TEST_NVMF_NICS=e810
00:01:04.238 SPDK_TEST_VFIOUSER=1
00:01:04.238 SPDK_RUN_UBSAN=1
00:01:04.238 NET_TYPE=phy
00:01:04.245 RUN_NIGHTLY=0
00:01:04.249 [Pipeline] readFile
00:01:04.269 [Pipeline] withEnv
00:01:04.271 [Pipeline] {
00:01:04.281 [Pipeline] sh
00:01:04.568 + set -ex
00:01:04.568 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:04.568 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:04.568 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.568 ++ SPDK_TEST_NVMF=1
00:01:04.568 ++ SPDK_TEST_NVME_CLI=1
00:01:04.568 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.568 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.568 ++ SPDK_TEST_VFIOUSER=1
00:01:04.568 ++ SPDK_RUN_UBSAN=1
00:01:04.569 ++ NET_TYPE=phy
00:01:04.569 ++ RUN_NIGHTLY=0
00:01:04.569 + case $SPDK_TEST_NVMF_NICS in
00:01:04.569 + DRIVERS=ice
00:01:04.569 + [[ tcp == \r\d\m\a ]]
00:01:04.569 + [[ -n ice ]]
00:01:04.569 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:04.569 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:04.569 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:04.569 rmmod: ERROR: Module i40iw is not currently loaded
00:01:04.569 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:04.569 + true
00:01:04.569 + for D in $DRIVERS
00:01:04.569 + sudo modprobe ice
00:01:04.569 + exit 0
00:01:04.578 [Pipeline] }
00:01:04.588 [Pipeline] // withEnv
00:01:04.593 [Pipeline] }
00:01:04.604 [Pipeline] // stage
00:01:04.611 [Pipeline] catchError
00:01:04.613 [Pipeline] {
00:01:04.625 [Pipeline] timeout
00:01:04.625 Timeout set to expire in 1 hr 0 min
00:01:04.626 [Pipeline] {
00:01:04.640 [Pipeline] stage
00:01:04.641 [Pipeline] { (Tests)
00:01:04.655 [Pipeline] sh
00:01:04.943 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.943 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.943 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.943 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:04.943 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.943 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.943 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:04.943 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.943 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.943 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.943 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:04.943 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.943 + source /etc/os-release
00:01:04.943 ++ NAME='Fedora Linux'
00:01:04.943 ++ VERSION='39 (Cloud Edition)'
00:01:04.943 ++ ID=fedora
00:01:04.943 ++ VERSION_ID=39
00:01:04.943 ++ VERSION_CODENAME=
00:01:04.943 ++ PLATFORM_ID=platform:f39
00:01:04.943 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:04.943 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:04.943 ++ LOGO=fedora-logo-icon
00:01:04.943 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:04.943 ++ HOME_URL=https://fedoraproject.org/
00:01:04.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:04.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:04.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:04.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:04.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:04.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:04.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:04.943 ++ SUPPORT_END=2024-11-12
00:01:04.943 ++ VARIANT='Cloud Edition'
00:01:04.943 ++ VARIANT_ID=cloud
00:01:04.943 + uname -a
00:01:04.943 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:04.943 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:07.551 Hugepages
00:01:07.551 node hugesize free / total
00:01:07.551 node0 1048576kB 0 / 0
00:01:07.551 node0 2048kB 0 / 0
00:01:07.551 node1 1048576kB 0 / 0
00:01:07.551 node1 2048kB 0 / 0
00:01:07.551 
00:01:07.551 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:07.551 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:07.551 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:07.551 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:07.814 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:01:07.814 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:07.814 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:07.815 + rm -f /tmp/spdk-ld-path
00:01:07.815 + source autorun-spdk.conf
00:01:07.815 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.815 ++ SPDK_TEST_NVMF=1
00:01:07.815 ++ SPDK_TEST_NVME_CLI=1
00:01:07.815 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.815 ++ SPDK_TEST_NVMF_NICS=e810
00:01:07.815 ++ SPDK_TEST_VFIOUSER=1
00:01:07.815 ++ SPDK_RUN_UBSAN=1
00:01:07.815 ++ NET_TYPE=phy
00:01:07.815 ++ RUN_NIGHTLY=0
00:01:07.815 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:07.815 + [[ -n '' ]]
00:01:07.815 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.815 + for M in /var/spdk/build-*-manifest.txt
00:01:07.815 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:07.815 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.815 + for M in /var/spdk/build-*-manifest.txt
00:01:07.815 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:07.815 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.815 + for M in /var/spdk/build-*-manifest.txt
00:01:07.815 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:07.815 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.815 ++ uname
00:01:07.815 + [[ Linux == \L\i\n\u\x ]]
00:01:07.815 + sudo dmesg -T
00:01:07.815 + sudo dmesg --clear
00:01:07.815 + dmesg_pid=1154513
00:01:07.815 + [[ Fedora Linux == FreeBSD ]]
00:01:07.815 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.815 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.815 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:07.815 + [[ -x /usr/src/fio-static/fio ]]
00:01:07.815 + export FIO_BIN=/usr/src/fio-static/fio
00:01:07.815 + FIO_BIN=/usr/src/fio-static/fio
00:01:07.815 + sudo dmesg -Tw
00:01:07.815 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:07.815 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:07.815 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:07.815 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:07.815 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:07.815 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:07.815 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:07.815 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:07.815 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.075 14:54:09 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:08.075 14:54:09 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:08.075 14:54:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:08.075 14:54:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:08.075 14:54:09 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.075 14:54:09 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:08.075 14:54:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:08.075 14:54:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:08.075 14:54:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:08.075 14:54:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:08.075 14:54:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:08.075 14:54:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.075 14:54:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.075 14:54:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.075 14:54:09 -- paths/export.sh@5 -- $ export PATH 00:01:08.075 14:54:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.075 14:54:09 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:08.075 14:54:09 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:08.075 14:54:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733752449.XXXXXX 00:01:08.075 14:54:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733752449.PhdZ7s 00:01:08.075 14:54:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:08.075 14:54:09 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:08.075 14:54:09 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:08.075 14:54:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:08.075 14:54:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:08.075 14:54:09 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:08.075 14:54:09 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:08.075 14:54:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.075 14:54:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:08.076 14:54:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:08.076 14:54:09 -- pm/common@17 -- $ local monitor 00:01:08.076 14:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:08.076 14:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:08.076 14:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:08.076 14:54:09 -- pm/common@21 -- $ date +%s 00:01:08.076 14:54:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:08.076 14:54:09 -- pm/common@21 -- $ date +%s 00:01:08.076 14:54:09 -- pm/common@25 -- $ sleep 1 00:01:08.076 14:54:09 -- pm/common@21 -- $ date +%s 00:01:08.076 14:54:09 -- pm/common@21 -- $ date +%s 00:01:08.076 14:54:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733752449 00:01:08.076 14:54:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733752449 00:01:08.076 14:54:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733752449 00:01:08.076 14:54:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733752449 00:01:08.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733752449_collect-vmstat.pm.log 00:01:08.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733752449_collect-cpu-load.pm.log 00:01:08.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733752449_collect-cpu-temp.pm.log 00:01:08.076 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733752449_collect-bmc-pm.bmc.pm.log 00:01:09.015 14:54:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:09.015 14:54:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:09.015 14:54:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:09.015 14:54:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.015 14:54:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:09.015 Mon Dec 9 01:54:10 PM UTC 2024 00:01:09.015 14:54:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:09.015 v25.01-pre-312-g3318278a6 00:01:09.015 14:54:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:09.015 14:54:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:09.015 14:54:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:09.015 14:54:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:09.015 14:54:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:09.015 14:54:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.015 ************************************ 00:01:09.015 START TEST ubsan 00:01:09.015 ************************************ 00:01:09.015 14:54:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:09.015 using ubsan 00:01:09.015 00:01:09.015 real 0m0.000s 00:01:09.015 user 0m0.000s 00:01:09.015 sys 0m0.000s 00:01:09.015 14:54:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:09.015 14:54:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:09.015 ************************************ 00:01:09.015 END TEST ubsan 00:01:09.015 ************************************ 00:01:09.274 14:54:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:09.274 14:54:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:09.274 14:54:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:09.274 14:54:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:09.274 14:54:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:09.274 14:54:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:09.274 14:54:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:09.274 14:54:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:09.274 
14:54:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:09.274 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:09.274 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:09.532 Using 'verbs' RDMA provider 00:01:22.684 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:34.898 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:34.898 Creating mk/config.mk...done. 00:01:34.898 Creating mk/cc.flags.mk...done. 00:01:34.898 Type 'make' to build. 00:01:34.898 14:54:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:34.898 14:54:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:34.898 14:54:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:34.898 14:54:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.898 ************************************ 00:01:34.898 START TEST make 00:01:34.898 ************************************ 00:01:34.898 14:54:36 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:34.898 make[1]: Nothing to be done for 'all'. 00:01:36.816 The Meson build system 00:01:36.816 Version: 1.5.0 00:01:36.816 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:36.816 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.816 Build type: native build 00:01:36.816 Project name: libvfio-user 00:01:36.816 Project version: 0.0.1 00:01:36.816 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.816 C linker for the host machine: cc ld.bfd 2.40-14 00:01:36.816 Host machine cpu family: x86_64 00:01:36.816 Host machine cpu: x86_64 00:01:36.816 Run-time dependency threads found: YES 00:01:36.816 Library dl found: YES 00:01:36.816 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.816 Run-time dependency json-c found: YES 0.17 00:01:36.816 Run-time dependency cmocka found: YES 1.1.7 00:01:36.816 Program pytest-3 found: NO 00:01:36.816 Program flake8 found: NO 00:01:36.816 Program misspell-fixer found: NO 00:01:36.816 Program restructuredtext-lint found: NO 00:01:36.816 Program valgrind found: YES (/usr/bin/valgrind) 00:01:36.816 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.816 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.816 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.816 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:36.816 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:36.816 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:36.816 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:36.816 Build targets in project: 8 00:01:36.816 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:36.816 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:36.816 00:01:36.816 libvfio-user 0.0.1 00:01:36.816 00:01:36.816 User defined options 00:01:36.816 buildtype : debug 00:01:36.816 default_library: shared 00:01:36.816 libdir : /usr/local/lib 00:01:36.816 00:01:36.816 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.075 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:37.075 [2/37] Compiling C object samples/null.p/null.c.o 00:01:37.075 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:37.075 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:37.075 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:37.075 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:37.075 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:37.075 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:37.075 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:37.075 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:37.075 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:37.075 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:37.075 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:37.075 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:37.075 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:37.075 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:37.075 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:37.075 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:37.075 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:37.075 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:37.075 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:37.075 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:37.335 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:37.335 [24/37] Compiling C object samples/client.p/client.c.o 00:01:37.335 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:37.335 [26/37] Compiling C object samples/server.p/server.c.o 00:01:37.335 [27/37] Linking target samples/client 00:01:37.335 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:37.335 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:37.335 [30/37] Linking target test/unit_tests 00:01:37.335 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:37.335 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:37.594 [33/37] Linking target samples/gpio-pci-idio-16 00:01:37.594 [34/37] Linking target samples/server 00:01:37.594 [35/37] Linking target samples/lspci 00:01:37.594 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:37.594 [37/37] Linking target samples/null 00:01:37.594 INFO: autodetecting backend as ninja 00:01:37.594 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:37.594 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:37.854 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.854 ninja: no work to do. 00:01:43.130 The Meson build system 00:01:43.130 Version: 1.5.0 00:01:43.130 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:43.130 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:43.130 Build type: native build 00:01:43.130 Program cat found: YES (/usr/bin/cat) 00:01:43.130 Project name: DPDK 00:01:43.130 Project version: 24.03.0 00:01:43.130 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.130 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.130 Host machine cpu family: x86_64 00:01:43.130 Host machine cpu: x86_64 00:01:43.130 Message: ## Building in Developer Mode ## 00:01:43.130 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.130 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:43.130 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.130 Program python3 found: YES (/usr/bin/python3) 00:01:43.130 Program cat found: YES (/usr/bin/cat) 00:01:43.130 Compiler for C supports arguments -march=native: YES 00:01:43.130 Checking for size of "void *" : 8 00:01:43.130 Checking for size of "void *" : 8 (cached) 00:01:43.130 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:43.130 Library m found: YES 00:01:43.130 Library numa found: YES 00:01:43.130 Has header "numaif.h" : YES 00:01:43.130 Library fdt found: NO 00:01:43.130 Library execinfo found: NO 00:01:43.130 Has header "execinfo.h" : YES 00:01:43.130 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.130 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.130 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.130 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.130 Run-time dependency openssl found: YES 3.1.1 00:01:43.130 Run-time dependency libpcap found: YES 1.10.4 00:01:43.130 Has header "pcap.h" with dependency libpcap: YES 00:01:43.130 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.130 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.130 Compiler for C supports arguments -Wformat: YES 00:01:43.130 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.130 Compiler for C supports arguments -Wformat-security: NO 00:01:43.130 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.130 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.130 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.130 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.130 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.130 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.130 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.130 Compiler for C supports arguments -Wundef: YES 00:01:43.131 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.131 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.131 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:43.131 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.131 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.131 Program objdump found: YES (/usr/bin/objdump) 00:01:43.131 Compiler for C supports arguments -mavx512f: YES 00:01:43.131 Checking if "AVX512 checking" compiles: YES 00:01:43.131 Fetching value of define "__SSE4_2__" : 1 00:01:43.131 Fetching value of define "__AES__" : 1 00:01:43.131 Fetching value of define "__AVX__" : 1 00:01:43.131 Fetching value of define "__AVX2__" : 1 00:01:43.131 Fetching value of define "__AVX512BW__" : 1 00:01:43.131 Fetching value of define "__AVX512CD__" : 1 00:01:43.131 Fetching value of define "__AVX512DQ__" : 1 00:01:43.131 Fetching value of define "__AVX512F__" : 1 00:01:43.131 Fetching value of define "__AVX512VL__" : 1 00:01:43.131 Fetching value of define "__PCLMUL__" : 1 00:01:43.131 Fetching value of define "__RDRND__" : 1 00:01:43.131 Fetching value of define "__RDSEED__" : 1 00:01:43.131 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:43.131 Fetching value of define "__znver1__" : (undefined) 00:01:43.131 Fetching value of define "__znver2__" : (undefined) 00:01:43.131 Fetching value of define "__znver3__" : (undefined) 00:01:43.131 Fetching value of define "__znver4__" : (undefined) 00:01:43.131 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.131 Message: lib/log: Defining dependency "log" 00:01:43.131 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.131 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.131 Checking for function "getentropy" : NO 00:01:43.131 Message: lib/eal: Defining dependency "eal" 00:01:43.131 Message: lib/ring: Defining dependency "ring" 00:01:43.131 Message: lib/rcu: Defining dependency "rcu" 00:01:43.131 Message: lib/mempool: Defining dependency "mempool" 00:01:43.131 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.131 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.131 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:43.131 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:43.131 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:43.131 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:43.131 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:43.131 Compiler for C supports arguments -mpclmul: YES 00:01:43.131 Compiler for C supports arguments -maes: YES 00:01:43.131 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.131 Compiler for C supports arguments -mavx512bw: YES 00:01:43.131 Compiler for C supports arguments -mavx512dq: YES 00:01:43.131 Compiler for C supports arguments -mavx512vl: YES 00:01:43.131 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.131 Compiler for C supports arguments -mavx2: YES 00:01:43.131 Compiler for C supports arguments -mavx: YES 00:01:43.131 Message: lib/net: Defining dependency "net" 00:01:43.131 Message: lib/meter: Defining dependency "meter" 00:01:43.131 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.131 Message: lib/pci: Defining dependency "pci" 00:01:43.131 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.131 Message: lib/hash: Defining dependency "hash" 00:01:43.131 Message: lib/timer: Defining dependency "timer" 00:01:43.131 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.131 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.131 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:43.131 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.131 Message: lib/power: Defining dependency "power" 00:01:43.131 Message: lib/reorder: Defining dependency "reorder" 00:01:43.131 Message: lib/security: Defining dependency "security" 00:01:43.131 Has header "linux/userfaultfd.h" : YES 00:01:43.131 Has header "linux/vduse.h" : YES 00:01:43.131 Message: lib/vhost: Defining dependency "vhost" 00:01:43.131 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:43.131 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:43.131 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:43.131 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:43.131 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:43.131 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:43.131 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:43.131 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:43.131 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:43.131 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:43.131 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:43.131 Configuring doxy-api-html.conf using configuration 00:01:43.131 Configuring doxy-api-man.conf using configuration 00:01:43.131 Program mandb found: YES (/usr/bin/mandb) 00:01:43.131 Program sphinx-build found: NO 00:01:43.131 Configuring rte_build_config.h using configuration 00:01:43.131 Message: 00:01:43.131 ================= 00:01:43.131 Applications Enabled 00:01:43.131 ================= 00:01:43.131 00:01:43.131 apps: 00:01:43.131 00:01:43.131 00:01:43.131 Message: 00:01:43.131 ================= 00:01:43.131 Libraries Enabled 00:01:43.131 ================= 00:01:43.131 00:01:43.131 libs: 00:01:43.131 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:43.131 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:43.131 cryptodev, dmadev, power, reorder, security, vhost, 00:01:43.131 00:01:43.131 Message: 00:01:43.131 =============== 00:01:43.131 Drivers Enabled 00:01:43.131 =============== 00:01:43.131 00:01:43.131 common: 00:01:43.131 00:01:43.131 bus: 00:01:43.131 pci, vdev, 00:01:43.131 mempool: 00:01:43.131 ring, 00:01:43.131 dma: 00:01:43.131 00:01:43.131 net: 00:01:43.131 00:01:43.131 crypto: 00:01:43.131 00:01:43.131 compress: 00:01:43.131 00:01:43.131 vdpa: 00:01:43.131 00:01:43.131 00:01:43.131 Message: 00:01:43.131 ================= 00:01:43.131 Content Skipped 00:01:43.131 ================= 00:01:43.131 00:01:43.131 apps: 00:01:43.131 dumpcap: explicitly disabled via build config 00:01:43.131 graph: explicitly disabled via build config 00:01:43.131 pdump: explicitly disabled via build config 00:01:43.131 proc-info: explicitly disabled via build config 00:01:43.131 test-acl: explicitly disabled via build config 00:01:43.131 test-bbdev: explicitly disabled via build config 00:01:43.131 test-cmdline: explicitly disabled via build config 00:01:43.131 test-compress-perf: explicitly disabled via build config 00:01:43.131 test-crypto-perf: explicitly disabled via build config 00:01:43.131 test-dma-perf: explicitly disabled via build config 00:01:43.131 test-eventdev: explicitly disabled via build config 00:01:43.131 test-fib: explicitly disabled via build config 00:01:43.131 test-flow-perf: explicitly disabled via build config 00:01:43.131 test-gpudev: explicitly 
disabled via build config 00:01:43.131 test-mldev: explicitly disabled via build config 00:01:43.131 test-pipeline: explicitly disabled via build config 00:01:43.131 test-pmd: explicitly disabled via build config 00:01:43.131 test-regex: explicitly disabled via build config 00:01:43.131 test-sad: explicitly disabled via build config 00:01:43.131 test-security-perf: explicitly disabled via build config 00:01:43.131 00:01:43.131 libs: 00:01:43.131 argparse: explicitly disabled via build config 00:01:43.131 metrics: explicitly disabled via build config 00:01:43.131 acl: explicitly disabled via build config 00:01:43.131 bbdev: explicitly disabled via build config 00:01:43.131 bitratestats: explicitly disabled via build config 00:01:43.131 bpf: explicitly disabled via build config 00:01:43.131 cfgfile: explicitly disabled via build config 00:01:43.131 distributor: explicitly disabled via build config 00:01:43.131 efd: explicitly disabled via build config 00:01:43.131 eventdev: explicitly disabled via build config 00:01:43.131 dispatcher: explicitly disabled via build config 00:01:43.131 gpudev: explicitly disabled via build config 00:01:43.131 gro: explicitly disabled via build config 00:01:43.131 gso: explicitly disabled via build config 00:01:43.131 ip_frag: explicitly disabled via build config 00:01:43.131 jobstats: explicitly disabled via build config 00:01:43.131 latencystats: explicitly disabled via build config 00:01:43.131 lpm: explicitly disabled via build config 00:01:43.131 member: explicitly disabled via build config 00:01:43.131 pcapng: explicitly disabled via build config 00:01:43.131 rawdev: explicitly disabled via build config 00:01:43.131 regexdev: explicitly disabled via build config 00:01:43.131 mldev: explicitly disabled via build config 00:01:43.131 rib: explicitly disabled via build config 00:01:43.131 sched: explicitly disabled via build config 00:01:43.131 stack: explicitly disabled via build config 00:01:43.131 ipsec: explicitly disabled via build config 00:01:43.131 pdcp: explicitly disabled via build config 00:01:43.131 fib: explicitly disabled via build config 00:01:43.131 port: explicitly disabled via build config 00:01:43.131 pdump: explicitly disabled via build config 00:01:43.131 table: explicitly disabled via build config 00:01:43.131 pipeline: explicitly disabled via build config 00:01:43.131 graph: explicitly disabled via build config 00:01:43.131 node: explicitly disabled via build config 00:01:43.131 00:01:43.131 drivers: 00:01:43.131 common/cpt: not in enabled drivers build config 00:01:43.131 common/dpaax: not in enabled drivers build config 00:01:43.131 common/iavf: not in enabled drivers build config 00:01:43.132 common/idpf: not in enabled drivers build config 00:01:43.132 common/ionic: not in enabled drivers build config 00:01:43.132 common/mvep: not in enabled drivers build config 00:01:43.132 common/octeontx: not in enabled drivers build config 00:01:43.132 bus/auxiliary: not in enabled drivers build config 00:01:43.132 bus/cdx: not in enabled drivers build config 00:01:43.132 bus/dpaa: not in enabled drivers build config 00:01:43.132 bus/fslmc: not in enabled drivers build config 00:01:43.132 bus/ifpga: not in enabled drivers build config 00:01:43.132 bus/platform: not in enabled drivers build config 00:01:43.132 bus/uacce: not in enabled drivers build config 00:01:43.132 bus/vmbus: not in enabled drivers build config 00:01:43.132 common/cnxk: not in enabled drivers build config 00:01:43.132 common/mlx5: not in enabled drivers build config 
00:01:43.132 common/nfp: not in enabled drivers build config 00:01:43.132 common/nitrox: not in enabled drivers build config 00:01:43.132 common/qat: not in enabled drivers build config 00:01:43.132 common/sfc_efx: not in enabled drivers build config 00:01:43.132 mempool/bucket: not in enabled drivers build config 00:01:43.132 mempool/cnxk: not in enabled drivers build config 00:01:43.132 mempool/dpaa: not in enabled drivers build config 00:01:43.132 mempool/dpaa2: not in enabled drivers build config 00:01:43.132 mempool/octeontx: not in enabled drivers build config 00:01:43.132 mempool/stack: not in enabled drivers build config 00:01:43.132 dma/cnxk: not in enabled drivers build config 00:01:43.132 dma/dpaa: not in enabled drivers build config 00:01:43.132 dma/dpaa2: not in enabled drivers build config 00:01:43.132 dma/hisilicon: not in enabled drivers build config 00:01:43.132 dma/idxd: not in enabled drivers build config 00:01:43.132 dma/ioat: not in enabled drivers build config 00:01:43.132 dma/skeleton: not in enabled drivers build config 00:01:43.132 net/af_packet: not in enabled drivers build config 00:01:43.132 net/af_xdp: not in enabled drivers build config 00:01:43.132 net/ark: not in enabled drivers build config 00:01:43.132 net/atlantic: not in enabled drivers build config 00:01:43.132 net/avp: not in enabled drivers build config 00:01:43.132 net/axgbe: not in enabled drivers build config 00:01:43.132 net/bnx2x: not in enabled drivers build config 00:01:43.132 net/bnxt: not in enabled drivers build config 00:01:43.132 net/bonding: not in enabled drivers build config 00:01:43.132 net/cnxk: not in enabled drivers build config 00:01:43.132 net/cpfl: not in enabled drivers build config 00:01:43.132 net/cxgbe: not in enabled drivers build config 00:01:43.132 net/dpaa: not in enabled drivers build config 00:01:43.132 net/dpaa2: not in enabled drivers build config 00:01:43.132 net/e1000: not in enabled drivers build config 00:01:43.132 net/ena: not in enabled drivers build config 00:01:43.132 net/enetc: not in enabled drivers build config 00:01:43.132 net/enetfec: not in enabled drivers build config 00:01:43.132 net/enic: not in enabled drivers build config 00:01:43.132 net/failsafe: not in enabled drivers build config 00:01:43.132 net/fm10k: not in enabled drivers build config 00:01:43.132 net/gve: not in enabled drivers build config 00:01:43.132 net/hinic: not in enabled drivers build config 00:01:43.132 net/hns3: not in enabled drivers build config 00:01:43.132 net/i40e: not in enabled drivers build config 00:01:43.132 net/iavf: not in enabled drivers build config 00:01:43.132 net/ice: not in enabled drivers build config 00:01:43.132 net/idpf: not in enabled drivers build config 00:01:43.132 net/igc: not in enabled drivers build config 00:01:43.132 net/ionic: not in enabled drivers build config 00:01:43.132 net/ipn3ke: not in enabled drivers build config 00:01:43.132 net/ixgbe: not in enabled drivers build config 00:01:43.132 net/mana: not in enabled drivers build config 00:01:43.132 net/memif: not in enabled drivers build config 00:01:43.132 net/mlx4: not in enabled drivers build config 00:01:43.132 net/mlx5: not in enabled drivers build config 00:01:43.132 net/mvneta: not in enabled drivers build config 00:01:43.132 net/mvpp2: not in enabled drivers build config 00:01:43.132 net/netvsc: not in enabled drivers build config 00:01:43.132 net/nfb: not in enabled drivers build config 00:01:43.132 net/nfp: not in enabled drivers build config 00:01:43.132 net/ngbe: not in enabled 
drivers build config 00:01:43.132 net/null: not in enabled drivers build config 00:01:43.132 net/octeontx: not in enabled drivers build config 00:01:43.132 net/octeon_ep: not in enabled drivers build config 00:01:43.132 net/pcap: not in enabled drivers build config 00:01:43.132 net/pfe: not in enabled drivers build config 00:01:43.132 net/qede: not in enabled drivers build config 00:01:43.132 net/ring: not in enabled drivers build config 00:01:43.132 net/sfc: not in enabled drivers build config 00:01:43.132 net/softnic: not in enabled drivers build config 00:01:43.132 net/tap: not in enabled drivers build config 00:01:43.132 net/thunderx: not in enabled drivers build config 00:01:43.132 net/txgbe: not in enabled drivers build config 00:01:43.132 net/vdev_netvsc: not in enabled drivers build config 00:01:43.132 net/vhost: not in enabled drivers build config 00:01:43.132 net/virtio: not in enabled drivers build config 00:01:43.132 net/vmxnet3: not in enabled drivers build config 00:01:43.132 raw/*: missing internal dependency, "rawdev" 00:01:43.132 crypto/armv8: not in enabled drivers build config 00:01:43.132 crypto/bcmfs: not in enabled drivers build config 00:01:43.132 crypto/caam_jr: not in enabled drivers build config 00:01:43.132 crypto/ccp: not in enabled drivers build config 00:01:43.132 crypto/cnxk: not in enabled drivers build config 00:01:43.132 crypto/dpaa_sec: not in enabled drivers build config 00:01:43.132 crypto/dpaa2_sec: not in enabled drivers build config 00:01:43.132 crypto/ipsec_mb: not in enabled drivers build config 00:01:43.132 crypto/mlx5: not in enabled drivers build config 00:01:43.132 crypto/mvsam: not in enabled drivers build config 00:01:43.132 crypto/nitrox: not in enabled drivers build config 00:01:43.132 crypto/null: not in enabled drivers build config 00:01:43.132 crypto/octeontx: not in enabled drivers build config 00:01:43.132 crypto/openssl: not in enabled drivers build config 00:01:43.132 crypto/scheduler: not in enabled drivers build config 00:01:43.132 crypto/uadk: not in enabled drivers build config 00:01:43.132 crypto/virtio: not in enabled drivers build config 00:01:43.132 compress/isal: not in enabled drivers build config 00:01:43.132 compress/mlx5: not in enabled drivers build config 00:01:43.132 compress/nitrox: not in enabled drivers build config 00:01:43.132 compress/octeontx: not in enabled drivers build config 00:01:43.132 compress/zlib: not in enabled drivers build config 00:01:43.132 regex/*: missing internal dependency, "regexdev" 00:01:43.132 ml/*: missing internal dependency, "mldev" 00:01:43.132 vdpa/ifc: not in enabled drivers build config 00:01:43.132 vdpa/mlx5: not in enabled drivers build config 00:01:43.132 vdpa/nfp: not in enabled drivers build config 00:01:43.132 vdpa/sfc: not in enabled drivers build config 00:01:43.132 event/*: missing internal dependency, "eventdev" 00:01:43.132 baseband/*: missing internal dependency, "bbdev" 00:01:43.132 gpu/*: missing internal dependency, "gpudev" 00:01:43.132 00:01:43.132 00:01:43.132 Build targets in project: 85 00:01:43.132 00:01:43.132 DPDK 24.03.0 00:01:43.132 00:01:43.132 User defined options 00:01:43.132 buildtype : debug 00:01:43.132 default_library : shared 00:01:43.132 libdir : lib 00:01:43.132 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:43.132 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:43.132 c_link_args : 00:01:43.132 cpu_instruction_set: native 00:01:43.132 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:43.132 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:43.132 enable_docs : false 00:01:43.132 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:43.132 enable_kmods : false 00:01:43.132 max_lcores : 128 00:01:43.132 tests : false 00:01:43.132 00:01:43.132 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.707 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:43.707 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:43.707 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:43.707 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:43.707 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:43.707 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:43.707 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:43.707 [7/268] Linking static target lib/librte_kvargs.a 00:01:43.707 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:43.707 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:43.707 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:43.707 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:43.707 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:43.707 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:43.707 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:43.707 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:43.707 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:43.707 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:43.707 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:43.969 [19/268] Linking static target lib/librte_log.a 00:01:43.969 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.969 [21/268] Linking static target lib/librte_pci.a 00:01:43.969 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:43.969 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:43.969 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:44.232 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.233 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.233 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:44.233 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:44.233 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:44.233 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:01:44.233 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.233 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:44.233 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.233 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.233 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:44.233 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:44.233 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:44.233 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:44.233 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.233 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:44.233 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:44.233 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.233 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:44.233 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.233 [45/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:44.233 [46/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:44.233 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:44.233 [48/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:44.233 [49/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.233 [50/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:44.233 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:44.233 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:44.233 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.233 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:44.233 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.233 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:44.233 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.233 [58/268] Linking static target lib/librte_ring.a 00:01:44.233 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:44.233 [60/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:44.233 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.233 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:44.233 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:44.233 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.233 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.233 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.233 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:44.233 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:44.233 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.233 [70/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:44.233 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:44.233 [72/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:44.233 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:44.233 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:44.233 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.233 [76/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.493 [77/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.493 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.493 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:44.493 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:44.493 [81/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:44.493 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.493 [83/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.493 [84/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:44.493 [85/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:44.493 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.493 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.493 [88/268] Linking static target lib/librte_meter.a 00:01:44.493 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:44.493 [90/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:44.493 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:44.493 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.493 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:44.493 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.493 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:44.493 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.493 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:44.493 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.493 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:44.493 [100/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.493 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:44.493 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:44.493 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.493 [104/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.493 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.493 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.493 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.493 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:44.493 [109/268] Linking static target lib/librte_telemetry.a 00:01:44.493 [110/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.493 [111/268] Linking static target lib/librte_mempool.a 00:01:44.493 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:44.493 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:44.493 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.493 [115/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:44.493 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.493 [117/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.493 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.493 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.493 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:44.493 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:44.493 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.493 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.493 [124/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.493 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:44.493 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:44.493 [127/268] Linking static target lib/librte_mbuf.a 00:01:44.493 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:44.493 [129/268] Linking static target lib/librte_rcu.a 00:01:44.493 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:44.493 [131/268] Linking static target lib/librte_net.a 00:01:44.493 [132/268] Linking static target lib/librte_eal.a 00:01:44.493 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.493 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.493 [135/268] Linking static target lib/librte_cmdline.a 00:01:44.493 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:44.493 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:44.493 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.752 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:44.752 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [142/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.752 [143/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:44.752 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:44.752 [146/268] Linking target lib/librte_log.so.24.1 00:01:44.752 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:44.752 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.752 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:44.752 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:44.752 [151/268] Linking static target lib/librte_timer.a 00:01:44.752 [152/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:44.752 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.752 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.752 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.752 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.752 [157/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.752 [158/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:44.752 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.752 [160/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.752 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.752 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:44.752 [163/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.752 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.752 [165/268] Linking static target lib/librte_compressdev.a 00:01:44.752 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:44.752 [167/268] Linking static target lib/librte_reorder.a 00:01:44.752 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:44.752 [169/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:44.752 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.752 [171/268] Linking static target lib/librte_dmadev.a 00:01:44.752 [172/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.752 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.752 [175/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [176/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.752 [177/268] Linking static target lib/librte_power.a 00:01:45.011 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.011 [179/268] Linking target lib/librte_kvargs.so.24.1 00:01:45.011 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:45.011 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.011 [182/268] Linking target lib/librte_telemetry.so.24.1 00:01:45.011 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:45.011 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:45.011 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.011 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:45.011 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:45.011 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:45.011 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:45.011 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:45.011 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.011 [192/268] Linking 
static target lib/librte_security.a 00:01:45.011 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:45.011 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.011 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:45.011 [196/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.011 [197/268] Linking static target lib/librte_hash.a 00:01:45.011 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:45.011 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.270 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:45.270 [201/268] Linking static target lib/librte_cryptodev.a 00:01:45.270 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.270 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.270 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.270 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.270 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.270 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:45.270 [208/268] Linking static target drivers/librte_bus_vdev.a 00:01:45.270 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.270 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.270 [211/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.270 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.270 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.270 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.270 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.270 [216/268] Linking static target drivers/librte_bus_pci.a 00:01:45.529 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.529 [218/268] Linking static target lib/librte_ethdev.a 00:01:45.529 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.529 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.529 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.529 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.788 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.788 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.788 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.046 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.046 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.983 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:46.983 [229/268] Linking static target lib/librte_vhost.a 00:01:46.983 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.887 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.154 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.719 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.719 [234/268] Linking target lib/librte_eal.so.24.1 00:01:54.720 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:54.720 [236/268] Linking target lib/librte_ring.so.24.1 00:01:54.720 [237/268] Linking target lib/librte_pci.so.24.1 00:01:54.720 [238/268] Linking target lib/librte_timer.so.24.1 00:01:54.720 [239/268] Linking target lib/librte_meter.so.24.1 00:01:54.720 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:54.720 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:54.977 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:54.977 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:54.977 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:54.977 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:54.977 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:54.977 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:54.977 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:54.977 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:55.235 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:55.235 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:55.235 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:55.235 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:55.235 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:55.235 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:55.235 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:55.235 [257/268] Linking target lib/librte_net.so.24.1 00:01:55.235 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:55.493 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:55.493 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:55.493 [261/268] Linking target lib/librte_hash.so.24.1 00:01:55.493 [262/268] Linking target lib/librte_security.so.24.1 00:01:55.493 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:55.493 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:55.751 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:55.751 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.751 [267/268] Linking target lib/librte_power.so.24.1 00:01:55.751 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:55.751 INFO: autodetecting backend as ninja 00:01:55.751 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:05.723 CC lib/log/log.o 00:02:05.723 CC 
lib/log/log_flags.o 00:02:05.723 CC lib/log/log_deprecated.o 00:02:05.723 CC lib/ut/ut.o 00:02:05.723 CC lib/ut_mock/mock.o 00:02:05.982 LIB libspdk_log.a 00:02:05.982 LIB libspdk_ut.a 00:02:05.982 LIB libspdk_ut_mock.a 00:02:05.982 SO libspdk_log.so.7.1 00:02:05.982 SO libspdk_ut.so.2.0 00:02:05.982 SO libspdk_ut_mock.so.6.0 00:02:05.982 SYMLINK libspdk_log.so 00:02:05.982 SYMLINK libspdk_ut.so 00:02:05.982 SYMLINK libspdk_ut_mock.so 00:02:06.550 CXX lib/trace_parser/trace.o 00:02:06.550 CC lib/util/base64.o 00:02:06.550 CC lib/util/bit_array.o 00:02:06.550 CC lib/util/cpuset.o 00:02:06.550 CC lib/util/crc32.o 00:02:06.550 CC lib/util/crc16.o 00:02:06.550 CC lib/util/crc32c.o 00:02:06.550 CC lib/util/crc32_ieee.o 00:02:06.550 CC lib/util/crc64.o 00:02:06.550 CC lib/dma/dma.o 00:02:06.550 CC lib/ioat/ioat.o 00:02:06.550 CC lib/util/dif.o 00:02:06.550 CC lib/util/fd.o 00:02:06.550 CC lib/util/fd_group.o 00:02:06.550 CC lib/util/file.o 00:02:06.550 CC lib/util/hexlify.o 00:02:06.550 CC lib/util/iov.o 00:02:06.550 CC lib/util/math.o 00:02:06.550 CC lib/util/net.o 00:02:06.550 CC lib/util/pipe.o 00:02:06.550 CC lib/util/strerror_tls.o 00:02:06.550 CC lib/util/string.o 00:02:06.550 CC lib/util/uuid.o 00:02:06.550 CC lib/util/xor.o 00:02:06.550 CC lib/util/zipf.o 00:02:06.550 CC lib/util/md5.o 00:02:06.550 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.550 CC lib/vfio_user/host/vfio_user.o 00:02:06.550 LIB libspdk_dma.a 00:02:06.809 SO libspdk_dma.so.5.0 00:02:06.809 LIB libspdk_ioat.a 00:02:06.809 SYMLINK libspdk_dma.so 00:02:06.809 SO libspdk_ioat.so.7.0 00:02:06.809 LIB libspdk_vfio_user.a 00:02:06.809 SYMLINK libspdk_ioat.so 00:02:06.809 SO libspdk_vfio_user.so.5.0 00:02:06.809 SYMLINK libspdk_vfio_user.so 00:02:06.809 LIB libspdk_util.a 00:02:07.068 SO libspdk_util.so.10.1 00:02:07.068 SYMLINK libspdk_util.so 00:02:07.068 LIB libspdk_trace_parser.a 00:02:07.068 SO libspdk_trace_parser.so.6.0 00:02:07.327 SYMLINK libspdk_trace_parser.so 00:02:07.327 CC lib/env_dpdk/env.o 00:02:07.327 CC lib/env_dpdk/memory.o 00:02:07.327 CC lib/env_dpdk/pci.o 00:02:07.327 CC lib/env_dpdk/init.o 00:02:07.327 CC lib/env_dpdk/threads.o 00:02:07.327 CC lib/env_dpdk/pci_ioat.o 00:02:07.327 CC lib/env_dpdk/pci_virtio.o 00:02:07.327 CC lib/env_dpdk/pci_vmd.o 00:02:07.327 CC lib/env_dpdk/pci_idxd.o 00:02:07.327 CC lib/env_dpdk/pci_event.o 00:02:07.327 CC lib/rdma_utils/rdma_utils.o 00:02:07.327 CC lib/vmd/vmd.o 00:02:07.327 CC lib/env_dpdk/sigbus_handler.o 00:02:07.327 CC lib/vmd/led.o 00:02:07.327 CC lib/json/json_parse.o 00:02:07.327 CC lib/conf/conf.o 00:02:07.327 CC lib/env_dpdk/pci_dpdk.o 00:02:07.327 CC lib/json/json_util.o 00:02:07.327 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.327 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.327 CC lib/json/json_write.o 00:02:07.327 CC lib/idxd/idxd.o 00:02:07.327 CC lib/idxd/idxd_user.o 00:02:07.327 CC lib/idxd/idxd_kernel.o 00:02:07.585 LIB libspdk_conf.a 00:02:07.585 LIB libspdk_rdma_utils.a 00:02:07.585 SO libspdk_conf.so.6.0 00:02:07.585 LIB libspdk_json.a 00:02:07.843 SO libspdk_rdma_utils.so.1.0 00:02:07.843 SO libspdk_json.so.6.0 00:02:07.843 SYMLINK libspdk_conf.so 00:02:07.843 SYMLINK libspdk_rdma_utils.so 00:02:07.843 SYMLINK libspdk_json.so 00:02:07.843 LIB libspdk_idxd.a 00:02:07.843 LIB libspdk_vmd.a 00:02:07.843 SO libspdk_idxd.so.12.1 00:02:08.102 SO libspdk_vmd.so.6.0 00:02:08.102 SYMLINK libspdk_idxd.so 00:02:08.102 SYMLINK libspdk_vmd.so 00:02:08.102 CC lib/rdma_provider/common.o 00:02:08.102 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:08.102 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:08.102 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.102 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.102 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.361 LIB libspdk_rdma_provider.a 00:02:08.361 SO libspdk_rdma_provider.so.7.0 00:02:08.361 LIB libspdk_jsonrpc.a 00:02:08.361 SYMLINK libspdk_rdma_provider.so 00:02:08.361 SO libspdk_jsonrpc.so.6.0 00:02:08.361 SYMLINK libspdk_jsonrpc.so 00:02:08.361 LIB libspdk_env_dpdk.a 00:02:08.620 SO libspdk_env_dpdk.so.15.1 00:02:08.620 SYMLINK libspdk_env_dpdk.so 00:02:08.620 CC lib/rpc/rpc.o 00:02:08.878 LIB libspdk_rpc.a 00:02:08.878 SO libspdk_rpc.so.6.0 00:02:08.879 SYMLINK libspdk_rpc.so 00:02:09.446 CC lib/keyring/keyring.o 00:02:09.446 CC lib/keyring/keyring_rpc.o 00:02:09.446 CC lib/notify/notify.o 00:02:09.446 CC lib/trace/trace.o 00:02:09.446 CC lib/notify/notify_rpc.o 00:02:09.446 CC lib/trace/trace_flags.o 00:02:09.446 CC lib/trace/trace_rpc.o 00:02:09.446 LIB libspdk_notify.a 00:02:09.446 SO libspdk_notify.so.6.0 00:02:09.446 LIB libspdk_keyring.a 00:02:09.446 LIB libspdk_trace.a 00:02:09.446 SO libspdk_keyring.so.2.0 00:02:09.446 SO libspdk_trace.so.11.0 00:02:09.446 SYMLINK libspdk_notify.so 00:02:09.705 SYMLINK libspdk_keyring.so 00:02:09.705 SYMLINK libspdk_trace.so 00:02:09.964 CC lib/thread/thread.o 00:02:09.964 CC lib/thread/iobuf.o 00:02:09.964 CC lib/sock/sock.o 00:02:09.964 CC lib/sock/sock_rpc.o 00:02:10.222 LIB libspdk_sock.a 00:02:10.222 SO libspdk_sock.so.10.0 00:02:10.480 SYMLINK libspdk_sock.so 00:02:10.739 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.739 CC lib/nvme/nvme_ctrlr.o 00:02:10.739 CC lib/nvme/nvme_fabric.o 00:02:10.739 CC lib/nvme/nvme_ns_cmd.o 00:02:10.739 CC lib/nvme/nvme_ns.o 00:02:10.739 CC lib/nvme/nvme_pcie_common.o 00:02:10.739 CC lib/nvme/nvme_pcie.o 00:02:10.739 CC lib/nvme/nvme_qpair.o 00:02:10.739 CC lib/nvme/nvme.o 00:02:10.739 CC lib/nvme/nvme_quirks.o 00:02:10.739 CC lib/nvme/nvme_transport.o 00:02:10.739 CC lib/nvme/nvme_discovery.o 00:02:10.739 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.739 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.739 CC lib/nvme/nvme_tcp.o 00:02:10.739 CC lib/nvme/nvme_opal.o 00:02:10.739 CC lib/nvme/nvme_io_msg.o 00:02:10.739 CC lib/nvme/nvme_poll_group.o 00:02:10.739 CC lib/nvme/nvme_zns.o 00:02:10.739 CC lib/nvme/nvme_stubs.o 00:02:10.739 CC lib/nvme/nvme_auth.o 00:02:10.739 CC lib/nvme/nvme_cuse.o 00:02:10.739 CC lib/nvme/nvme_vfio_user.o 00:02:10.739 CC lib/nvme/nvme_rdma.o 00:02:10.997 LIB libspdk_thread.a 00:02:10.997 SO libspdk_thread.so.11.0 00:02:11.255 SYMLINK libspdk_thread.so 00:02:11.514 CC lib/vfu_tgt/tgt_rpc.o 00:02:11.514 CC lib/vfu_tgt/tgt_endpoint.o 00:02:11.514 CC lib/fsdev/fsdev.o 00:02:11.514 CC lib/fsdev/fsdev_io.o 00:02:11.514 CC lib/fsdev/fsdev_rpc.o 00:02:11.514 CC lib/accel/accel.o 00:02:11.514 CC lib/accel/accel_rpc.o 00:02:11.514 CC lib/accel/accel_sw.o 00:02:11.514 CC lib/init/json_config.o 00:02:11.514 CC lib/init/subsystem.o 00:02:11.514 CC lib/blob/blobstore.o 00:02:11.514 CC lib/virtio/virtio.o 00:02:11.514 CC lib/init/subsystem_rpc.o 00:02:11.514 CC lib/blob/blob_bs_dev.o 00:02:11.514 CC lib/virtio/virtio_vhost_user.o 00:02:11.514 CC lib/blob/request.o 00:02:11.514 CC lib/init/rpc.o 00:02:11.514 CC lib/blob/zeroes.o 00:02:11.514 CC lib/virtio/virtio_vfio_user.o 00:02:11.514 CC lib/virtio/virtio_pci.o 00:02:11.772 LIB libspdk_init.a 00:02:11.772 SO libspdk_init.so.6.0 00:02:11.772 LIB libspdk_vfu_tgt.a 00:02:11.772 LIB libspdk_virtio.a 00:02:11.773 SO libspdk_vfu_tgt.so.3.0 00:02:11.773 SYMLINK libspdk_init.so 
00:02:11.773 SO libspdk_virtio.so.7.0 00:02:11.773 SYMLINK libspdk_vfu_tgt.so 00:02:11.773 SYMLINK libspdk_virtio.so 00:02:12.032 LIB libspdk_fsdev.a 00:02:12.032 SO libspdk_fsdev.so.2.0 00:02:12.032 CC lib/event/app.o 00:02:12.032 CC lib/event/reactor.o 00:02:12.032 CC lib/event/log_rpc.o 00:02:12.032 CC lib/event/app_rpc.o 00:02:12.032 CC lib/event/scheduler_static.o 00:02:12.032 SYMLINK libspdk_fsdev.so 00:02:12.291 LIB libspdk_accel.a 00:02:12.291 SO libspdk_accel.so.16.0 00:02:12.291 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:12.291 SYMLINK libspdk_accel.so 00:02:12.291 LIB libspdk_nvme.a 00:02:12.291 LIB libspdk_event.a 00:02:12.550 SO libspdk_event.so.14.0 00:02:12.550 SO libspdk_nvme.so.15.0 00:02:12.550 SYMLINK libspdk_event.so 00:02:12.550 CC lib/bdev/bdev.o 00:02:12.550 CC lib/bdev/bdev_rpc.o 00:02:12.550 CC lib/bdev/bdev_zone.o 00:02:12.550 CC lib/bdev/part.o 00:02:12.550 CC lib/bdev/scsi_nvme.o 00:02:12.550 SYMLINK libspdk_nvme.so 00:02:12.809 LIB libspdk_fuse_dispatcher.a 00:02:12.809 SO libspdk_fuse_dispatcher.so.1.0 00:02:12.809 SYMLINK libspdk_fuse_dispatcher.so 00:02:13.745 LIB libspdk_blob.a 00:02:13.745 SO libspdk_blob.so.12.0 00:02:13.745 SYMLINK libspdk_blob.so 00:02:14.004 CC lib/blobfs/blobfs.o 00:02:14.004 CC lib/blobfs/tree.o 00:02:14.004 CC lib/lvol/lvol.o 00:02:14.572 LIB libspdk_bdev.a 00:02:14.572 SO libspdk_bdev.so.17.0 00:02:14.572 LIB libspdk_blobfs.a 00:02:14.572 SO libspdk_blobfs.so.11.0 00:02:14.572 LIB libspdk_lvol.a 00:02:14.572 SYMLINK libspdk_bdev.so 00:02:14.572 SYMLINK libspdk_blobfs.so 00:02:14.572 SO libspdk_lvol.so.11.0 00:02:14.830 SYMLINK libspdk_lvol.so 00:02:15.091 CC lib/scsi/dev.o 00:02:15.091 CC lib/scsi/lun.o 00:02:15.091 CC lib/scsi/port.o 00:02:15.091 CC lib/scsi/scsi_bdev.o 00:02:15.091 CC lib/scsi/scsi.o 00:02:15.091 CC lib/scsi/scsi_pr.o 00:02:15.091 CC lib/scsi/scsi_rpc.o 00:02:15.091 CC lib/scsi/task.o 00:02:15.091 CC lib/nbd/nbd.o 00:02:15.091 CC lib/nbd/nbd_rpc.o 00:02:15.091 CC lib/nvmf/ctrlr.o 00:02:15.091 CC lib/nvmf/ctrlr_discovery.o 00:02:15.091 CC lib/nvmf/ctrlr_bdev.o 00:02:15.091 CC lib/nvmf/subsystem.o 00:02:15.091 CC lib/nvmf/nvmf.o 00:02:15.091 CC lib/ublk/ublk.o 00:02:15.091 CC lib/ublk/ublk_rpc.o 00:02:15.091 CC lib/nvmf/nvmf_rpc.o 00:02:15.091 CC lib/nvmf/transport.o 00:02:15.091 CC lib/nvmf/tcp.o 00:02:15.091 CC lib/nvmf/stubs.o 00:02:15.091 CC lib/nvmf/mdns_server.o 00:02:15.091 CC lib/nvmf/vfio_user.o 00:02:15.091 CC lib/nvmf/rdma.o 00:02:15.091 CC lib/ftl/ftl_core.o 00:02:15.091 CC lib/nvmf/auth.o 00:02:15.091 CC lib/ftl/ftl_init.o 00:02:15.091 CC lib/ftl/ftl_layout.o 00:02:15.091 CC lib/ftl/ftl_debug.o 00:02:15.091 CC lib/ftl/ftl_io.o 00:02:15.091 CC lib/ftl/ftl_sb.o 00:02:15.091 CC lib/ftl/ftl_l2p.o 00:02:15.091 CC lib/ftl/ftl_l2p_flat.o 00:02:15.091 CC lib/ftl/ftl_band.o 00:02:15.091 CC lib/ftl/ftl_nv_cache.o 00:02:15.091 CC lib/ftl/ftl_band_ops.o 00:02:15.091 CC lib/ftl/ftl_rq.o 00:02:15.091 CC lib/ftl/ftl_writer.o 00:02:15.091 CC lib/ftl/ftl_reloc.o 00:02:15.091 CC lib/ftl/ftl_l2p_cache.o 00:02:15.091 CC lib/ftl/ftl_p2l.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.091 CC lib/ftl/ftl_p2l_log.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.091 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.091 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.091 CC lib/ftl/utils/ftl_conf.o 00:02:15.091 CC lib/ftl/utils/ftl_mempool.o 00:02:15.091 CC lib/ftl/utils/ftl_property.o 00:02:15.091 CC lib/ftl/utils/ftl_md.o 00:02:15.091 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.091 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.091 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.091 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.091 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.091 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.091 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.091 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.091 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.091 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.091 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:15.091 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.091 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.091 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.091 CC lib/ftl/ftl_trace.o 00:02:15.091 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:15.091 CC lib/ftl/base/ftl_base_dev.o 00:02:15.659 LIB libspdk_scsi.a 00:02:15.659 LIB libspdk_nbd.a 00:02:15.659 SO libspdk_scsi.so.9.0 00:02:15.659 SO libspdk_nbd.so.7.0 00:02:15.659 SYMLINK libspdk_scsi.so 00:02:15.659 SYMLINK libspdk_nbd.so 00:02:15.659 LIB libspdk_ublk.a 00:02:15.917 SO libspdk_ublk.so.3.0 00:02:15.917 SYMLINK libspdk_ublk.so 00:02:15.917 LIB libspdk_ftl.a 00:02:16.176 CC lib/iscsi/conn.o 00:02:16.176 CC lib/vhost/vhost.o 00:02:16.176 CC lib/iscsi/init_grp.o 00:02:16.176 CC lib/iscsi/iscsi.o 00:02:16.176 CC lib/vhost/vhost_rpc.o 00:02:16.176 CC lib/iscsi/param.o 00:02:16.176 CC lib/vhost/vhost_scsi.o 00:02:16.176 CC lib/iscsi/portal_grp.o 00:02:16.176 CC lib/vhost/vhost_blk.o 00:02:16.176 CC lib/iscsi/tgt_node.o 00:02:16.176 CC lib/iscsi/iscsi_subsystem.o 00:02:16.176 CC lib/vhost/rte_vhost_user.o 00:02:16.176 CC lib/iscsi/iscsi_rpc.o 00:02:16.176 CC lib/iscsi/task.o 00:02:16.176 SO libspdk_ftl.so.9.0 00:02:16.435 SYMLINK libspdk_ftl.so 00:02:17.003 LIB libspdk_nvmf.a 00:02:17.003 LIB libspdk_vhost.a 00:02:17.003 SO libspdk_vhost.so.8.0 00:02:17.003 SO libspdk_nvmf.so.20.0 00:02:17.003 SYMLINK libspdk_vhost.so 00:02:17.003 LIB libspdk_iscsi.a 00:02:17.003 SYMLINK libspdk_nvmf.so 00:02:17.003 SO libspdk_iscsi.so.8.0 00:02:17.262 SYMLINK libspdk_iscsi.so 00:02:17.831 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.831 CC module/vfu_device/vfu_virtio.o 00:02:17.831 CC module/vfu_device/vfu_virtio_blk.o 00:02:17.831 CC module/vfu_device/vfu_virtio_scsi.o 00:02:17.831 CC module/vfu_device/vfu_virtio_rpc.o 00:02:17.831 CC module/vfu_device/vfu_virtio_fs.o 00:02:17.831 CC module/keyring/linux/keyring.o 00:02:17.831 CC module/keyring/linux/keyring_rpc.o 00:02:17.831 CC module/sock/posix/posix.o 00:02:17.831 CC module/keyring/file/keyring.o 00:02:17.831 CC module/keyring/file/keyring_rpc.o 00:02:17.831 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.831 LIB libspdk_env_dpdk_rpc.a 00:02:17.831 CC module/accel/dsa/accel_dsa.o 00:02:17.831 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.831 CC module/accel/iaa/accel_iaa.o 00:02:17.831 CC module/accel/ioat/accel_ioat.o 00:02:17.831 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.831 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.831 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.831 CC module/fsdev/aio/fsdev_aio.o 00:02:17.831 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:17.831 CC module/blob/bdev/blob_bdev.o 00:02:17.831 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:17.831 CC module/accel/error/accel_error.o 00:02:17.831 CC module/accel/error/accel_error_rpc.o 00:02:17.831 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.831 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.090 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.090 LIB libspdk_keyring_linux.a 00:02:18.090 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.090 LIB libspdk_keyring_file.a 00:02:18.090 LIB libspdk_scheduler_gscheduler.a 00:02:18.090 SO libspdk_keyring_linux.so.1.0 00:02:18.090 LIB libspdk_accel_ioat.a 00:02:18.090 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.090 LIB libspdk_scheduler_dynamic.a 00:02:18.090 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.090 SO libspdk_keyring_file.so.2.0 00:02:18.090 SYMLINK libspdk_keyring_linux.so 00:02:18.090 LIB libspdk_accel_iaa.a 00:02:18.090 LIB libspdk_accel_error.a 00:02:18.090 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.090 SO libspdk_accel_ioat.so.6.0 00:02:18.090 SO libspdk_accel_error.so.2.0 00:02:18.090 SO libspdk_accel_iaa.so.3.0 00:02:18.090 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.090 SYMLINK libspdk_scheduler_gscheduler.so 00:02:18.090 SYMLINK libspdk_keyring_file.so 00:02:18.090 LIB libspdk_blob_bdev.a 00:02:18.090 LIB libspdk_accel_dsa.a 00:02:18.090 SYMLINK libspdk_scheduler_dynamic.so 00:02:18.090 SYMLINK libspdk_accel_ioat.so 00:02:18.090 SO libspdk_blob_bdev.so.12.0 00:02:18.090 SO libspdk_accel_dsa.so.5.0 00:02:18.349 SYMLINK libspdk_accel_error.so 00:02:18.349 SYMLINK libspdk_accel_iaa.so 00:02:18.349 SYMLINK libspdk_blob_bdev.so 00:02:18.349 LIB libspdk_vfu_device.a 00:02:18.349 SYMLINK libspdk_accel_dsa.so 00:02:18.349 SO libspdk_vfu_device.so.3.0 00:02:18.349 SYMLINK libspdk_vfu_device.so 00:02:18.349 LIB libspdk_fsdev_aio.a 00:02:18.608 LIB libspdk_sock_posix.a 00:02:18.608 SO libspdk_fsdev_aio.so.1.0 00:02:18.608 SO libspdk_sock_posix.so.6.0 00:02:18.608 SYMLINK libspdk_fsdev_aio.so 00:02:18.608 SYMLINK libspdk_sock_posix.so 00:02:18.608 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.608 CC module/bdev/error/vbdev_error.o 00:02:18.608 CC module/bdev/raid/bdev_raid.o 00:02:18.608 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.608 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.608 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.608 CC module/bdev/split/vbdev_split.o 00:02:18.608 CC module/bdev/raid/raid0.o 00:02:18.608 CC module/bdev/raid/raid1.o 00:02:18.608 CC module/bdev/raid/concat.o 00:02:18.608 CC module/bdev/aio/bdev_aio.o 00:02:18.866 CC module/bdev/delay/vbdev_delay.o 00:02:18.866 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.866 CC module/bdev/gpt/gpt.o 00:02:18.866 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.866 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.866 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.866 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.866 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.866 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.866 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.866 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.866 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.866 CC module/bdev/null/bdev_null.o 00:02:18.866 CC module/bdev/nvme/bdev_nvme.o 00:02:18.866 CC module/bdev/null/bdev_null_rpc.o 00:02:18.866 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.866 CC module/bdev/nvme/nvme_rpc.o 00:02:18.866 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.866 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.866 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.866 CC module/bdev/ftl/bdev_ftl.o 00:02:18.866 CC 
module/bdev/nvme/bdev_mdns_client.o 00:02:18.866 CC module/bdev/malloc/bdev_malloc.o 00:02:18.866 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.866 CC module/bdev/nvme/vbdev_opal.o 00:02:18.866 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.866 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.866 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.866 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.866 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.866 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:19.124 LIB libspdk_bdev_split.a 00:02:19.124 LIB libspdk_blobfs_bdev.a 00:02:19.124 SO libspdk_bdev_split.so.6.0 00:02:19.124 SO libspdk_blobfs_bdev.so.6.0 00:02:19.124 LIB libspdk_bdev_null.a 00:02:19.124 LIB libspdk_bdev_error.a 00:02:19.124 SO libspdk_bdev_null.so.6.0 00:02:19.124 SYMLINK libspdk_bdev_split.so 00:02:19.124 LIB libspdk_bdev_aio.a 00:02:19.124 LIB libspdk_bdev_ftl.a 00:02:19.124 LIB libspdk_bdev_gpt.a 00:02:19.124 SYMLINK libspdk_blobfs_bdev.so 00:02:19.124 SO libspdk_bdev_error.so.6.0 00:02:19.124 LIB libspdk_bdev_passthru.a 00:02:19.124 SO libspdk_bdev_aio.so.6.0 00:02:19.124 SO libspdk_bdev_ftl.so.6.0 00:02:19.124 SO libspdk_bdev_gpt.so.6.0 00:02:19.124 LIB libspdk_bdev_iscsi.a 00:02:19.124 LIB libspdk_bdev_delay.a 00:02:19.124 SYMLINK libspdk_bdev_null.so 00:02:19.124 LIB libspdk_bdev_zone_block.a 00:02:19.124 SO libspdk_bdev_passthru.so.6.0 00:02:19.124 LIB libspdk_bdev_malloc.a 00:02:19.124 SO libspdk_bdev_iscsi.so.6.0 00:02:19.124 SYMLINK libspdk_bdev_error.so 00:02:19.124 SO libspdk_bdev_delay.so.6.0 00:02:19.124 SO libspdk_bdev_zone_block.so.6.0 00:02:19.124 SO libspdk_bdev_malloc.so.6.0 00:02:19.124 SYMLINK libspdk_bdev_gpt.so 00:02:19.124 SYMLINK libspdk_bdev_aio.so 00:02:19.124 SYMLINK libspdk_bdev_ftl.so 00:02:19.124 SYMLINK libspdk_bdev_passthru.so 00:02:19.383 SYMLINK libspdk_bdev_iscsi.so 00:02:19.383 SYMLINK libspdk_bdev_zone_block.so 00:02:19.383 SYMLINK libspdk_bdev_delay.so 00:02:19.383 SYMLINK libspdk_bdev_malloc.so 00:02:19.383 LIB libspdk_bdev_virtio.a 00:02:19.383 LIB libspdk_bdev_lvol.a 00:02:19.383 SO libspdk_bdev_virtio.so.6.0 00:02:19.383 SO libspdk_bdev_lvol.so.6.0 00:02:19.383 SYMLINK libspdk_bdev_virtio.so 00:02:19.383 SYMLINK libspdk_bdev_lvol.so 00:02:19.641 LIB libspdk_bdev_raid.a 00:02:19.641 SO libspdk_bdev_raid.so.6.0 00:02:19.641 SYMLINK libspdk_bdev_raid.so 00:02:20.578 LIB libspdk_bdev_nvme.a 00:02:20.837 SO libspdk_bdev_nvme.so.7.1 00:02:20.837 SYMLINK libspdk_bdev_nvme.so 00:02:21.406 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.406 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.406 CC module/event/subsystems/vmd/vmd.o 00:02:21.406 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.406 CC module/event/subsystems/scheduler/scheduler.o 00:02:21.406 CC module/event/subsystems/sock/sock.o 00:02:21.406 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.406 CC module/event/subsystems/keyring/keyring.o 00:02:21.406 CC module/event/subsystems/fsdev/fsdev.o 00:02:21.406 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:21.664 LIB libspdk_event_sock.a 00:02:21.664 LIB libspdk_event_fsdev.a 00:02:21.664 LIB libspdk_event_iobuf.a 00:02:21.664 LIB libspdk_event_keyring.a 00:02:21.664 LIB libspdk_event_vhost_blk.a 00:02:21.664 LIB libspdk_event_vmd.a 00:02:21.664 LIB libspdk_event_scheduler.a 00:02:21.664 SO libspdk_event_fsdev.so.1.0 00:02:21.664 LIB libspdk_event_vfu_tgt.a 00:02:21.664 SO libspdk_event_sock.so.5.0 00:02:21.664 SO libspdk_event_iobuf.so.3.0 00:02:21.664 SO libspdk_event_keyring.so.1.0 00:02:21.664 SO 
libspdk_event_vhost_blk.so.3.0 00:02:21.664 SO libspdk_event_vmd.so.6.0 00:02:21.664 SO libspdk_event_scheduler.so.4.0 00:02:21.664 SO libspdk_event_vfu_tgt.so.3.0 00:02:21.664 SYMLINK libspdk_event_sock.so 00:02:21.664 SYMLINK libspdk_event_fsdev.so 00:02:21.664 SYMLINK libspdk_event_keyring.so 00:02:21.664 SYMLINK libspdk_event_vhost_blk.so 00:02:21.664 SYMLINK libspdk_event_iobuf.so 00:02:21.664 SYMLINK libspdk_event_vfu_tgt.so 00:02:21.664 SYMLINK libspdk_event_vmd.so 00:02:21.664 SYMLINK libspdk_event_scheduler.so 00:02:21.923 CC module/event/subsystems/accel/accel.o 00:02:22.182 LIB libspdk_event_accel.a 00:02:22.182 SO libspdk_event_accel.so.6.0 00:02:22.182 SYMLINK libspdk_event_accel.so 00:02:22.441 CC module/event/subsystems/bdev/bdev.o 00:02:22.699 LIB libspdk_event_bdev.a 00:02:22.699 SO libspdk_event_bdev.so.6.0 00:02:22.699 SYMLINK libspdk_event_bdev.so 00:02:23.266 CC module/event/subsystems/scsi/scsi.o 00:02:23.266 CC module/event/subsystems/ublk/ublk.o 00:02:23.266 CC module/event/subsystems/nbd/nbd.o 00:02:23.266 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.266 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.266 LIB libspdk_event_scsi.a 00:02:23.266 LIB libspdk_event_ublk.a 00:02:23.266 LIB libspdk_event_nbd.a 00:02:23.266 SO libspdk_event_scsi.so.6.0 00:02:23.266 SO libspdk_event_ublk.so.3.0 00:02:23.266 SO libspdk_event_nbd.so.6.0 00:02:23.266 LIB libspdk_event_nvmf.a 00:02:23.266 SYMLINK libspdk_event_scsi.so 00:02:23.266 SO libspdk_event_nvmf.so.6.0 00:02:23.266 SYMLINK libspdk_event_ublk.so 00:02:23.266 SYMLINK libspdk_event_nbd.so 00:02:23.527 SYMLINK libspdk_event_nvmf.so 00:02:23.527 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.791 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.791 LIB libspdk_event_vhost_scsi.a 00:02:23.791 LIB libspdk_event_iscsi.a 00:02:23.791 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.791 SO libspdk_event_iscsi.so.6.0 00:02:23.791 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.791 SYMLINK libspdk_event_iscsi.so 00:02:24.117 SO libspdk.so.6.0 00:02:24.117 SYMLINK libspdk.so 00:02:24.418 CXX app/trace/trace.o 00:02:24.418 CC app/spdk_lspci/spdk_lspci.o 00:02:24.418 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.418 CC test/rpc_client/rpc_client_test.o 00:02:24.418 CC app/spdk_nvme_perf/perf.o 00:02:24.418 CC app/spdk_nvme_identify/identify.o 00:02:24.418 CC app/spdk_top/spdk_top.o 00:02:24.418 CC app/trace_record/trace_record.o 00:02:24.418 TEST_HEADER include/spdk/accel.h 00:02:24.418 TEST_HEADER include/spdk/accel_module.h 00:02:24.418 TEST_HEADER include/spdk/base64.h 00:02:24.418 TEST_HEADER include/spdk/bdev.h 00:02:24.418 TEST_HEADER include/spdk/assert.h 00:02:24.418 TEST_HEADER include/spdk/barrier.h 00:02:24.418 TEST_HEADER include/spdk/bdev_module.h 00:02:24.418 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.418 TEST_HEADER include/spdk/bit_pool.h 00:02:24.418 TEST_HEADER include/spdk/bit_array.h 00:02:24.418 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.418 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.418 TEST_HEADER include/spdk/blob.h 00:02:24.418 TEST_HEADER include/spdk/blobfs.h 00:02:24.418 TEST_HEADER include/spdk/config.h 00:02:24.418 TEST_HEADER include/spdk/conf.h 00:02:24.418 TEST_HEADER include/spdk/cpuset.h 00:02:24.418 TEST_HEADER include/spdk/crc32.h 00:02:24.418 TEST_HEADER include/spdk/crc64.h 00:02:24.418 TEST_HEADER include/spdk/crc16.h 00:02:24.418 TEST_HEADER include/spdk/dif.h 00:02:24.418 TEST_HEADER include/spdk/endian.h 00:02:24.418 TEST_HEADER include/spdk/dma.h 00:02:24.418 
TEST_HEADER include/spdk/env_dpdk.h 00:02:24.418 TEST_HEADER include/spdk/env.h 00:02:24.418 TEST_HEADER include/spdk/event.h 00:02:24.418 TEST_HEADER include/spdk/fd.h 00:02:24.418 TEST_HEADER include/spdk/file.h 00:02:24.418 TEST_HEADER include/spdk/fd_group.h 00:02:24.418 TEST_HEADER include/spdk/fsdev.h 00:02:24.418 TEST_HEADER include/spdk/fsdev_module.h 00:02:24.418 TEST_HEADER include/spdk/ftl.h 00:02:24.418 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:24.418 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.418 TEST_HEADER include/spdk/hexlify.h 00:02:24.418 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.418 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.418 TEST_HEADER include/spdk/histogram_data.h 00:02:24.418 TEST_HEADER include/spdk/idxd.h 00:02:24.418 TEST_HEADER include/spdk/ioat.h 00:02:24.418 TEST_HEADER include/spdk/init.h 00:02:24.418 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.418 CC app/spdk_dd/spdk_dd.o 00:02:24.418 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.418 TEST_HEADER include/spdk/json.h 00:02:24.418 TEST_HEADER include/spdk/keyring.h 00:02:24.418 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.418 TEST_HEADER include/spdk/log.h 00:02:24.418 TEST_HEADER include/spdk/keyring_module.h 00:02:24.418 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.418 TEST_HEADER include/spdk/likely.h 00:02:24.418 TEST_HEADER include/spdk/lvol.h 00:02:24.418 TEST_HEADER include/spdk/memory.h 00:02:24.418 TEST_HEADER include/spdk/mmio.h 00:02:24.418 TEST_HEADER include/spdk/nbd.h 00:02:24.418 TEST_HEADER include/spdk/md5.h 00:02:24.418 TEST_HEADER include/spdk/notify.h 00:02:24.418 TEST_HEADER include/spdk/net.h 00:02:24.418 TEST_HEADER include/spdk/nvme.h 00:02:24.418 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.418 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.418 CC app/nvmf_tgt/nvmf_main.o 00:02:24.418 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.418 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.418 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.418 TEST_HEADER include/spdk/nvmf.h 00:02:24.418 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.418 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.418 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.418 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.418 TEST_HEADER include/spdk/opal.h 00:02:24.418 TEST_HEADER include/spdk/opal_spec.h 00:02:24.418 TEST_HEADER include/spdk/pci_ids.h 00:02:24.418 TEST_HEADER include/spdk/pipe.h 00:02:24.418 TEST_HEADER include/spdk/reduce.h 00:02:24.418 TEST_HEADER include/spdk/queue.h 00:02:24.418 TEST_HEADER include/spdk/scheduler.h 00:02:24.418 TEST_HEADER include/spdk/rpc.h 00:02:24.418 TEST_HEADER include/spdk/scsi.h 00:02:24.418 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.418 TEST_HEADER include/spdk/sock.h 00:02:24.418 CC app/spdk_tgt/spdk_tgt.o 00:02:24.418 TEST_HEADER include/spdk/stdinc.h 00:02:24.418 TEST_HEADER include/spdk/string.h 00:02:24.418 TEST_HEADER include/spdk/thread.h 00:02:24.418 TEST_HEADER include/spdk/trace.h 00:02:24.418 TEST_HEADER include/spdk/tree.h 00:02:24.418 TEST_HEADER include/spdk/ublk.h 00:02:24.418 TEST_HEADER include/spdk/trace_parser.h 00:02:24.418 TEST_HEADER include/spdk/util.h 00:02:24.418 TEST_HEADER include/spdk/uuid.h 00:02:24.418 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.418 TEST_HEADER include/spdk/version.h 00:02:24.418 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.418 TEST_HEADER include/spdk/vhost.h 00:02:24.418 TEST_HEADER include/spdk/vmd.h 00:02:24.418 TEST_HEADER include/spdk/xor.h 00:02:24.418 TEST_HEADER include/spdk/zipf.h 
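The CXX test/cpp_headers/*.o entries that follow compile one small C++ translation unit per public spdk header, which effectively checks that each header can be included on its own from C++. A minimal sketch of the same idea (hypothetical loop, not the project's actual test harness; assumes it is run from an SPDK source root):

  # Illustrative only: build one TU per public header to prove it is self-contained.
  for hdr in include/spdk/*.h; do
    printf '#include <spdk/%s>\n' "$(basename "$hdr")" > /tmp/hdr_check.cpp
    g++ -Iinclude -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $hdr"
  done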
00:02:24.418 CXX test/cpp_headers/accel.o 00:02:24.418 CXX test/cpp_headers/accel_module.o 00:02:24.418 CXX test/cpp_headers/assert.o 00:02:24.418 CXX test/cpp_headers/barrier.o 00:02:24.418 CXX test/cpp_headers/base64.o 00:02:24.418 CXX test/cpp_headers/bdev.o 00:02:24.418 CXX test/cpp_headers/bdev_module.o 00:02:24.418 CXX test/cpp_headers/bdev_zone.o 00:02:24.418 CXX test/cpp_headers/bit_array.o 00:02:24.418 CXX test/cpp_headers/bit_pool.o 00:02:24.418 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.418 CXX test/cpp_headers/blobfs.o 00:02:24.418 CXX test/cpp_headers/blob_bdev.o 00:02:24.418 CXX test/cpp_headers/blob.o 00:02:24.418 CXX test/cpp_headers/conf.o 00:02:24.418 CXX test/cpp_headers/crc16.o 00:02:24.418 CXX test/cpp_headers/config.o 00:02:24.418 CXX test/cpp_headers/cpuset.o 00:02:24.418 CXX test/cpp_headers/crc32.o 00:02:24.418 CXX test/cpp_headers/crc64.o 00:02:24.713 CXX test/cpp_headers/dma.o 00:02:24.713 CXX test/cpp_headers/endian.o 00:02:24.713 CXX test/cpp_headers/env_dpdk.o 00:02:24.713 CXX test/cpp_headers/env.o 00:02:24.713 CXX test/cpp_headers/dif.o 00:02:24.713 CXX test/cpp_headers/fd_group.o 00:02:24.713 CXX test/cpp_headers/event.o 00:02:24.713 CXX test/cpp_headers/fd.o 00:02:24.713 CXX test/cpp_headers/fsdev.o 00:02:24.713 CXX test/cpp_headers/fsdev_module.o 00:02:24.713 CXX test/cpp_headers/file.o 00:02:24.713 CXX test/cpp_headers/ftl.o 00:02:24.713 CXX test/cpp_headers/gpt_spec.o 00:02:24.713 CXX test/cpp_headers/fuse_dispatcher.o 00:02:24.713 CXX test/cpp_headers/histogram_data.o 00:02:24.713 CXX test/cpp_headers/hexlify.o 00:02:24.713 CXX test/cpp_headers/idxd.o 00:02:24.713 CXX test/cpp_headers/idxd_spec.o 00:02:24.713 CXX test/cpp_headers/init.o 00:02:24.713 CXX test/cpp_headers/ioat.o 00:02:24.713 CXX test/cpp_headers/ioat_spec.o 00:02:24.713 CXX test/cpp_headers/jsonrpc.o 00:02:24.713 CXX test/cpp_headers/json.o 00:02:24.713 CXX test/cpp_headers/keyring.o 00:02:24.713 CXX test/cpp_headers/iscsi_spec.o 00:02:24.713 CXX test/cpp_headers/keyring_module.o 00:02:24.713 CXX test/cpp_headers/likely.o 00:02:24.713 CXX test/cpp_headers/log.o 00:02:24.713 CXX test/cpp_headers/md5.o 00:02:24.713 CXX test/cpp_headers/lvol.o 00:02:24.713 CXX test/cpp_headers/mmio.o 00:02:24.713 CXX test/cpp_headers/memory.o 00:02:24.713 CXX test/cpp_headers/net.o 00:02:24.713 CXX test/cpp_headers/notify.o 00:02:24.713 CXX test/cpp_headers/nvme.o 00:02:24.713 CXX test/cpp_headers/nbd.o 00:02:24.713 CXX test/cpp_headers/nvme_intel.o 00:02:24.713 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.713 CXX test/cpp_headers/nvme_spec.o 00:02:24.713 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.713 CXX test/cpp_headers/nvme_zns.o 00:02:24.713 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.713 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.713 CXX test/cpp_headers/nvmf_spec.o 00:02:24.713 CXX test/cpp_headers/nvmf.o 00:02:24.713 CXX test/cpp_headers/nvmf_transport.o 00:02:24.713 CC examples/ioat/perf/perf.o 00:02:24.713 CC examples/ioat/verify/verify.o 00:02:24.713 CXX test/cpp_headers/opal.o 00:02:24.713 CC test/env/vtophys/vtophys.o 00:02:24.713 CC test/env/memory/memory_ut.o 00:02:24.713 CC test/app/jsoncat/jsoncat.o 00:02:24.713 CC app/fio/nvme/fio_plugin.o 00:02:24.713 CC examples/util/zipf/zipf.o 00:02:24.713 CC test/env/pci/pci_ut.o 00:02:24.713 CC test/app/stub/stub.o 00:02:24.713 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.713 CC test/thread/poller_perf/poller_perf.o 00:02:24.713 CXX test/cpp_headers/opal_spec.o 00:02:24.713 CC test/app/histogram_perf/histogram_perf.o 00:02:24.713 
LINK spdk_lspci 00:02:24.713 CC test/dma/test_dma/test_dma.o 00:02:24.713 CC app/fio/bdev/fio_plugin.o 00:02:24.713 CC test/app/bdev_svc/bdev_svc.o 00:02:24.999 LINK spdk_nvme_discover 00:02:24.999 LINK interrupt_tgt 00:02:24.999 LINK nvmf_tgt 00:02:24.999 LINK rpc_client_test 00:02:24.999 LINK iscsi_tgt 00:02:25.259 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.259 LINK jsoncat 00:02:25.259 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.259 LINK spdk_trace_record 00:02:25.259 LINK zipf 00:02:25.259 CXX test/cpp_headers/pci_ids.o 00:02:25.259 CXX test/cpp_headers/pipe.o 00:02:25.259 CXX test/cpp_headers/queue.o 00:02:25.259 CXX test/cpp_headers/reduce.o 00:02:25.259 CXX test/cpp_headers/rpc.o 00:02:25.259 CXX test/cpp_headers/scheduler.o 00:02:25.259 CXX test/cpp_headers/scsi.o 00:02:25.259 CXX test/cpp_headers/scsi_spec.o 00:02:25.259 CXX test/cpp_headers/sock.o 00:02:25.259 CXX test/cpp_headers/stdinc.o 00:02:25.259 CXX test/cpp_headers/string.o 00:02:25.259 LINK verify 00:02:25.259 CXX test/cpp_headers/thread.o 00:02:25.259 CXX test/cpp_headers/trace.o 00:02:25.259 CXX test/cpp_headers/trace_parser.o 00:02:25.259 CXX test/cpp_headers/tree.o 00:02:25.259 LINK vtophys 00:02:25.259 CXX test/cpp_headers/ublk.o 00:02:25.259 CXX test/cpp_headers/util.o 00:02:25.259 CXX test/cpp_headers/uuid.o 00:02:25.259 CXX test/cpp_headers/version.o 00:02:25.259 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.259 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.259 LINK histogram_perf 00:02:25.259 CXX test/cpp_headers/vhost.o 00:02:25.259 CXX test/cpp_headers/vmd.o 00:02:25.259 CXX test/cpp_headers/xor.o 00:02:25.259 CXX test/cpp_headers/zipf.o 00:02:25.259 LINK poller_perf 00:02:25.259 LINK spdk_dd 00:02:25.259 LINK bdev_svc 00:02:25.259 LINK spdk_tgt 00:02:25.259 LINK env_dpdk_post_init 00:02:25.259 LINK stub 00:02:25.259 LINK spdk_trace 00:02:25.259 LINK ioat_perf 00:02:25.517 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.517 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.517 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.517 LINK spdk_nvme 00:02:25.517 LINK pci_ut 00:02:25.517 LINK spdk_bdev 00:02:25.517 LINK test_dma 00:02:25.775 CC app/vhost/vhost.o 00:02:25.775 LINK spdk_nvme_perf 00:02:25.775 CC examples/idxd/perf/perf.o 00:02:25.775 CC examples/sock/hello_world/hello_sock.o 00:02:25.775 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.775 CC test/event/reactor/reactor.o 00:02:25.775 LINK nvme_fuzz 00:02:25.775 CC examples/vmd/led/led.o 00:02:25.775 CC test/event/event_perf/event_perf.o 00:02:25.775 CC test/event/reactor_perf/reactor_perf.o 00:02:25.775 CC examples/thread/thread/thread_ex.o 00:02:25.775 CC test/event/app_repeat/app_repeat.o 00:02:25.775 CC test/event/scheduler/scheduler.o 00:02:25.775 LINK vhost_fuzz 00:02:25.775 LINK spdk_top 00:02:25.775 LINK spdk_nvme_identify 00:02:26.033 LINK lsvmd 00:02:26.033 LINK vhost 00:02:26.033 LINK reactor 00:02:26.033 LINK reactor_perf 00:02:26.033 LINK led 00:02:26.033 LINK event_perf 00:02:26.033 LINK mem_callbacks 00:02:26.033 LINK hello_sock 00:02:26.033 LINK app_repeat 00:02:26.033 LINK thread 00:02:26.033 LINK idxd_perf 00:02:26.033 LINK scheduler 00:02:26.033 CC test/nvme/aer/aer.o 00:02:26.033 CC test/nvme/reset/reset.o 00:02:26.033 CC test/nvme/reserve/reserve.o 00:02:26.033 CC test/nvme/startup/startup.o 00:02:26.033 CC test/nvme/err_injection/err_injection.o 00:02:26.033 CC test/nvme/overhead/overhead.o 00:02:26.033 CC test/nvme/sgl/sgl.o 00:02:26.033 CC test/nvme/fdp/fdp.o 00:02:26.033 CC test/nvme/compliance/nvme_compliance.o 
00:02:26.033 CC test/nvme/e2edp/nvme_dp.o 00:02:26.033 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.033 CC test/nvme/boot_partition/boot_partition.o 00:02:26.033 CC test/nvme/connect_stress/connect_stress.o 00:02:26.033 CC test/nvme/simple_copy/simple_copy.o 00:02:26.033 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.033 CC test/nvme/cuse/cuse.o 00:02:26.033 CC test/accel/dif/dif.o 00:02:26.033 CC test/blobfs/mkfs/mkfs.o 00:02:26.291 LINK memory_ut 00:02:26.291 CC test/lvol/esnap/esnap.o 00:02:26.291 LINK startup 00:02:26.291 LINK err_injection 00:02:26.291 LINK boot_partition 00:02:26.291 LINK reserve 00:02:26.291 LINK doorbell_aers 00:02:26.291 LINK connect_stress 00:02:26.291 LINK fused_ordering 00:02:26.291 LINK reset 00:02:26.291 LINK simple_copy 00:02:26.291 LINK aer 00:02:26.291 LINK mkfs 00:02:26.291 LINK sgl 00:02:26.291 LINK nvme_dp 00:02:26.291 LINK overhead 00:02:26.549 CC examples/nvme/reconnect/reconnect.o 00:02:26.549 CC examples/nvme/hello_world/hello_world.o 00:02:26.549 CC examples/nvme/hotplug/hotplug.o 00:02:26.549 CC examples/nvme/arbitration/arbitration.o 00:02:26.549 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.549 LINK nvme_compliance 00:02:26.549 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.549 CC examples/nvme/abort/abort.o 00:02:26.549 LINK fdp 00:02:26.549 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.549 CC examples/accel/perf/accel_perf.o 00:02:26.549 CC examples/blob/hello_world/hello_blob.o 00:02:26.549 CC examples/blob/cli/blobcli.o 00:02:26.549 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:26.549 LINK cmb_copy 00:02:26.550 LINK pmr_persistence 00:02:26.550 LINK hello_world 00:02:26.550 LINK hotplug 00:02:26.807 LINK arbitration 00:02:26.807 LINK reconnect 00:02:26.807 LINK abort 00:02:26.807 LINK dif 00:02:26.807 LINK hello_blob 00:02:26.807 LINK nvme_manage 00:02:26.807 LINK hello_fsdev 00:02:26.807 LINK iscsi_fuzz 00:02:27.065 LINK accel_perf 00:02:27.065 LINK blobcli 00:02:27.323 LINK cuse 00:02:27.324 CC test/bdev/bdevio/bdevio.o 00:02:27.324 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.324 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.582 LINK bdevio 00:02:27.582 LINK hello_bdev 00:02:28.150 LINK bdevperf 00:02:28.408 CC examples/nvmf/nvmf/nvmf.o 00:02:28.976 LINK nvmf 00:02:29.913 LINK esnap 00:02:30.173 00:02:30.173 real 0m55.680s 00:02:30.173 user 8m19.456s 00:02:30.173 sys 3m51.951s 00:02:30.173 14:55:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:30.173 14:55:31 make -- common/autotest_common.sh@10 -- $ set +x 00:02:30.173 ************************************ 00:02:30.173 END TEST make 00:02:30.173 ************************************ 00:02:30.173 14:55:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:30.173 14:55:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:30.173 14:55:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:30.173 14:55:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.173 14:55:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:30.173 14:55:31 -- pm/common@44 -- $ pid=1154555 00:02:30.173 14:55:31 -- pm/common@50 -- $ kill -TERM 1154555 00:02:30.173 14:55:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.173 14:55:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:30.173 14:55:31 -- pm/common@44 -- $ pid=1154556 00:02:30.173 14:55:31 
-- pm/common@50 -- $ kill -TERM 1154556 00:02:30.173 14:55:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.173 14:55:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:30.173 14:55:31 -- pm/common@44 -- $ pid=1154558 00:02:30.173 14:55:31 -- pm/common@50 -- $ kill -TERM 1154558 00:02:30.173 14:55:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.173 14:55:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:30.173 14:55:31 -- pm/common@44 -- $ pid=1154583 00:02:30.173 14:55:31 -- pm/common@50 -- $ sudo -E kill -TERM 1154583 00:02:30.432 14:55:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:30.432 14:55:31 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:30.432 14:55:32 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:30.432 14:55:32 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:30.432 14:55:32 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:30.432 14:55:32 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:30.432 14:55:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:30.432 14:55:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:30.432 14:55:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:30.432 14:55:32 -- scripts/common.sh@336 -- # IFS=.-: 00:02:30.432 14:55:32 -- scripts/common.sh@336 -- # read -ra ver1 00:02:30.432 14:55:32 -- scripts/common.sh@337 -- # IFS=.-: 00:02:30.432 14:55:32 -- scripts/common.sh@337 -- # read -ra ver2 00:02:30.432 14:55:32 -- scripts/common.sh@338 -- # local 'op=<' 00:02:30.432 14:55:32 -- scripts/common.sh@340 -- # ver1_l=2 00:02:30.432 14:55:32 -- scripts/common.sh@341 -- # ver2_l=1 00:02:30.432 14:55:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:30.432 14:55:32 -- scripts/common.sh@344 -- # case "$op" in 00:02:30.432 14:55:32 -- scripts/common.sh@345 -- # : 1 00:02:30.432 14:55:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:30.432 14:55:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.432 14:55:32 -- scripts/common.sh@365 -- # decimal 1 00:02:30.432 14:55:32 -- scripts/common.sh@353 -- # local d=1 00:02:30.432 14:55:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:30.432 14:55:32 -- scripts/common.sh@355 -- # echo 1 00:02:30.432 14:55:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:30.432 14:55:32 -- scripts/common.sh@366 -- # decimal 2 00:02:30.432 14:55:32 -- scripts/common.sh@353 -- # local d=2 00:02:30.432 14:55:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:30.432 14:55:32 -- scripts/common.sh@355 -- # echo 2 00:02:30.432 14:55:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:30.432 14:55:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:30.432 14:55:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:30.432 14:55:32 -- scripts/common.sh@368 -- # return 0 00:02:30.432 14:55:32 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:30.432 14:55:32 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.432 --rc genhtml_branch_coverage=1 00:02:30.432 --rc genhtml_function_coverage=1 00:02:30.432 --rc genhtml_legend=1 00:02:30.432 --rc geninfo_all_blocks=1 00:02:30.432 --rc geninfo_unexecuted_blocks=1 00:02:30.432 00:02:30.432 ' 00:02:30.432 14:55:32 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.432 --rc genhtml_branch_coverage=1 00:02:30.432 --rc genhtml_function_coverage=1 00:02:30.432 --rc genhtml_legend=1 00:02:30.432 --rc geninfo_all_blocks=1 00:02:30.432 --rc geninfo_unexecuted_blocks=1 00:02:30.432 00:02:30.432 ' 00:02:30.432 14:55:32 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.432 --rc genhtml_branch_coverage=1 00:02:30.432 --rc genhtml_function_coverage=1 00:02:30.432 --rc genhtml_legend=1 00:02:30.432 --rc geninfo_all_blocks=1 00:02:30.432 --rc geninfo_unexecuted_blocks=1 00:02:30.432 00:02:30.432 ' 00:02:30.432 14:55:32 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:30.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.432 --rc genhtml_branch_coverage=1 00:02:30.432 --rc genhtml_function_coverage=1 00:02:30.432 --rc genhtml_legend=1 00:02:30.432 --rc geninfo_all_blocks=1 00:02:30.432 --rc geninfo_unexecuted_blocks=1 00:02:30.432 00:02:30.432 ' 00:02:30.432 14:55:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:30.432 14:55:32 -- nvmf/common.sh@7 -- # uname -s 00:02:30.432 14:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:30.432 14:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:30.432 14:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:30.433 14:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:30.433 14:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:30.433 14:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:30.433 14:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:30.433 14:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:30.433 14:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:30.433 14:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:30.433 14:55:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:30.433 14:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:30.433 14:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:30.433 14:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:30.433 14:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:30.433 14:55:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:30.433 14:55:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:30.433 14:55:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:30.433 14:55:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:30.433 14:55:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.433 14:55:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.433 14:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.433 14:55:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.433 14:55:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.433 14:55:32 -- paths/export.sh@5 -- # export PATH 00:02:30.433 14:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.433 14:55:32 -- nvmf/common.sh@51 -- # : 0 00:02:30.433 14:55:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:30.433 14:55:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:30.433 14:55:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:30.433 14:55:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:30.433 14:55:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:30.433 14:55:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:30.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:30.433 14:55:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:30.433 14:55:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:30.433 14:55:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:30.433 14:55:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:30.433 14:55:32 -- spdk/autotest.sh@32 -- # uname -s 00:02:30.433 14:55:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:30.433 14:55:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:30.433 14:55:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
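Note on the host-identity values traced above: test/nvmf/common.sh builds its NVMe host identity from nvme-cli's gen-hostnqn output. A minimal sketch of that derivation, assuming nvme-cli is installed and that the UUID is stripped from the NQN with plain parameter expansion (the exact extraction inside common.sh may differ), is:
    # Generate a host NQN and derive the host ID from its UUID suffix
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID portion
    # Both values are then passed to every 'nvme connect' issued during the run
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")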
00:02:30.433 14:55:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:30.433 14:55:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.433 14:55:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:30.433 14:55:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:30.433 14:55:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.433 14:55:32 -- spdk/autotest.sh@48 -- # udevadm_pid=1216876 00:02:30.433 14:55:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:30.433 14:55:32 -- pm/common@17 -- # local monitor 00:02:30.433 14:55:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.433 14:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.433 14:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.433 14:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.433 14:55:32 -- pm/common@21 -- # date +%s 00:02:30.433 14:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.433 14:55:32 -- pm/common@21 -- # date +%s 00:02:30.433 14:55:32 -- pm/common@25 -- # sleep 1 00:02:30.433 14:55:32 -- pm/common@21 -- # date +%s 00:02:30.433 14:55:32 -- pm/common@21 -- # date +%s 00:02:30.433 14:55:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733752532 00:02:30.433 14:55:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733752532 00:02:30.433 14:55:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733752532 00:02:30.692 14:55:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733752532 00:02:30.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733752532_collect-cpu-load.pm.log 00:02:30.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733752532_collect-vmstat.pm.log 00:02:30.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733752532_collect-cpu-temp.pm.log 00:02:30.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733752532_collect-bmc-pm.bmc.pm.log 00:02:31.629 14:55:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:31.629 14:55:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:31.629 14:55:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:31.629 14:55:33 -- common/autotest_common.sh@10 -- # set +x 00:02:31.629 14:55:33 -- spdk/autotest.sh@59 -- # create_test_list 00:02:31.629 14:55:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:31.629 14:55:33 -- common/autotest_common.sh@10 -- # set +x 00:02:31.629 14:55:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:31.629 14:55:33 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.629 14:55:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.629 14:55:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:31.629 14:55:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.629 14:55:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:31.629 14:55:33 -- common/autotest_common.sh@1457 -- # uname 00:02:31.629 14:55:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:31.629 14:55:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:31.629 14:55:33 -- common/autotest_common.sh@1477 -- # uname 00:02:31.629 14:55:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:31.629 14:55:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:31.629 14:55:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:31.629 lcov: LCOV version 1.15 00:02:31.629 14:55:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:43.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:43.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:58.720 14:55:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:58.720 14:55:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:58.720 14:55:57 -- common/autotest_common.sh@10 -- # set +x 00:02:58.720 14:55:57 -- spdk/autotest.sh@78 -- # rm -f 00:02:58.720 14:55:57 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.289 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:02:59.289 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:59.289 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.289 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.289 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.289 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.289 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.289 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.547 0000:80:04.0 (8086 
2021): Already using the ioatdma driver 00:02:59.806 14:56:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:59.806 14:56:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:59.806 14:56:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:59.806 14:56:01 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:02:59.806 14:56:01 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:02:59.806 14:56:01 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:02:59.806 14:56:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:02:59.806 14:56:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:59.806 14:56:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:59.806 14:56:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.806 14:56:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:59.806 14:56:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1669 -- # bdf=0000:5f:00.0 00:02:59.806 14:56:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:02:59.806 14:56:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:02:59.806 14:56:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:59.806 14:56:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:59.806 14:56:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:02:59.806 14:56:01 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:02:59.806 14:56:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:59.806 14:56:01 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:02:59.806 14:56:01 -- common/autotest_common.sh@1672 -- # zoned_ctrls["$nvme"]=0000:5f:00.0 00:02:59.806 14:56:01 -- common/autotest_common.sh@1673 -- # continue 2 00:02:59.806 14:56:01 -- common/autotest_common.sh@1678 -- # for nvme in "${!zoned_ctrls[@]}" 00:02:59.806 14:56:01 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:02:59.806 14:56:01 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:02:59.806 14:56:01 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:02:59.806 14:56:01 -- spdk/autotest.sh@85 -- # (( 2 > 0 )) 00:02:59.806 14:56:01 -- spdk/autotest.sh@90 -- # export 'PCI_BLOCKED=0000:5f:00.0 0000:5f:00.0' 00:02:59.806 14:56:01 -- spdk/autotest.sh@90 -- # PCI_BLOCKED='0000:5f:00.0 0000:5f:00.0' 00:02:59.806 14:56:01 -- spdk/autotest.sh@91 -- # export 'PCI_ZONED=0000:5f:00.0 0000:5f:00.0' 00:02:59.806 14:56:01 -- spdk/autotest.sh@91 -- # PCI_ZONED='0000:5f:00.0 0000:5f:00.0' 00:02:59.806 14:56:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:59.806 14:56:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:59.806 14:56:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:59.806 14:56:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:59.806 14:56:01 -- 
scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:59.806 No valid GPT data, bailing 00:02:59.806 14:56:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:59.806 14:56:01 -- scripts/common.sh@394 -- # pt= 00:02:59.806 14:56:01 -- scripts/common.sh@395 -- # return 1 00:02:59.806 14:56:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:59.806 1+0 records in 00:02:59.806 1+0 records out 00:02:59.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531736 s, 197 MB/s 00:02:59.806 14:56:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:59.806 14:56:01 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:02:59.806 14:56:01 -- spdk/autotest.sh@99 -- # continue 00:02:59.806 14:56:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:59.806 14:56:01 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:02:59.806 14:56:01 -- spdk/autotest.sh@99 -- # continue 00:02:59.806 14:56:01 -- spdk/autotest.sh@105 -- # sync 00:02:59.806 14:56:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:59.806 14:56:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:59.806 14:56:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:05.078 14:56:06 -- spdk/autotest.sh@111 -- # uname -s 00:03:05.078 14:56:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:05.078 14:56:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:05.078 14:56:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:08.368 Hugepages 00:03:08.368 node hugesize free / total 00:03:08.368 node0 1048576kB 0 / 0 00:03:08.368 node0 2048kB 0 / 0 00:03:08.368 node1 1048576kB 0 / 0 00:03:08.368 node1 2048kB 0 / 0 00:03:08.368 00:03:08.368 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:08.368 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:08.368 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:08.368 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:08.368 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:03:08.368 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:08.368 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:08.368 14:56:10 -- spdk/autotest.sh@117 -- # uname -s 00:03:08.368 14:56:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:08.368 14:56:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:08.368 14:56:10 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.903 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:11.471 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:03:11.471 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.471 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.472 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.472 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.472 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.472 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.409 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.409 14:56:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:13.345 14:56:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:13.345 14:56:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:13.345 14:56:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:13.345 14:56:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:13.345 14:56:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:13.345 14:56:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:13.345 14:56:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:13.345 14:56:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:13.345 14:56:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:13.345 14:56:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:13.345 14:56:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:13.345 14:56:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.635 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:16.635 Waiting for block devices as requested 00:03:16.635 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:16.635 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:16.635 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:16.635 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:16.894 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:16.894 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:16.894 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:16.894 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:17.153 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:17.153 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:17.153 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:17.411 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:17.411 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:17.411 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:17.670 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:17.670 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:17.670 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:17.929 14:56:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:17.929 14:56:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:17.929 14:56:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:17.929 14:56:19 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:17.930 14:56:19 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:17.930 14:56:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:17.930 14:56:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:17.930 14:56:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:17.930 14:56:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:17.930 14:56:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:17.930 14:56:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:17.930 14:56:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:17.930 14:56:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:17.930 14:56:19 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:17.930 14:56:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:17.930 14:56:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:17.930 14:56:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:17.930 14:56:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:17.930 14:56:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:17.930 14:56:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:17.930 14:56:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:17.930 14:56:19 -- common/autotest_common.sh@1543 -- # continue 00:03:17.930 14:56:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:17.930 14:56:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:17.930 14:56:19 -- common/autotest_common.sh@10 -- # set +x 00:03:17.930 14:56:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:17.930 14:56:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.930 14:56:19 -- common/autotest_common.sh@10 -- # set +x 00:03:17.930 14:56:19 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.464 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:21.031 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.031 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.968 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.968 14:56:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:21.968 14:56:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.968 14:56:23 -- common/autotest_common.sh@10 -- # set +x 00:03:21.968 14:56:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:21.968 14:56:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:21.968 14:56:23 -- 
common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:21.969 14:56:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:21.969 14:56:23 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:21.969 14:56:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:21.969 14:56:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:21.969 14:56:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:21.969 14:56:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:21.969 14:56:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:21.969 14:56:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:21.969 14:56:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:21.969 14:56:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:21.969 14:56:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:21.969 14:56:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:21.969 14:56:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:21.969 14:56:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:21.969 14:56:23 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:21.969 14:56:23 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:21.969 14:56:23 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:21.969 14:56:23 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:21.969 14:56:23 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:21.969 14:56:23 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:21.969 14:56:23 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1231381 00:03:21.969 14:56:23 -- common/autotest_common.sh@1585 -- # waitforlisten 1231381 00:03:21.969 14:56:23 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:21.969 14:56:23 -- common/autotest_common.sh@835 -- # '[' -z 1231381 ']' 00:03:21.969 14:56:23 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:21.969 14:56:23 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:21.969 14:56:23 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.969 14:56:23 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:21.969 14:56:23 -- common/autotest_common.sh@10 -- # set +x 00:03:22.228 [2024-12-09 14:56:23.802364] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
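The get_nvme_bdfs_by_id 0x0a54 trace above amounts to listing the NVMe controllers via gen_nvme.sh and keeping only those whose PCI device ID matches. A condensed sketch of that loop, using the same paths seen in this run, is:
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=()
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        # keep controllers whose PCI device ID is 0x0a54 (the drive at 0000:5e:00.0 here)
        if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
            bdfs+=("$bdf")
        fi
    done
    printf '%s\n' "${bdfs[@]}"      # prints 0000:5e:00.0 on this machine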
00:03:22.228 [2024-12-09 14:56:23.802421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231381 ] 00:03:22.228 [2024-12-09 14:56:23.880840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:22.228 [2024-12-09 14:56:23.922409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:22.487 14:56:24 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:22.487 14:56:24 -- common/autotest_common.sh@868 -- # return 0 00:03:22.487 14:56:24 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:22.487 14:56:24 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:22.487 14:56:24 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:25.779 nvme0n1 00:03:25.779 14:56:27 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:25.779 [2024-12-09 14:56:27.331886] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 1 00:03:25.779 [2024-12-09 14:56:27.331916] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 1 00:03:25.779 request: 00:03:25.779 { 00:03:25.779 "nvme_ctrlr_name": "nvme0", 00:03:25.779 "password": "test", 00:03:25.779 "method": "bdev_nvme_opal_revert", 00:03:25.779 "req_id": 1 00:03:25.779 } 00:03:25.779 Got JSON-RPC error response 00:03:25.779 response: 00:03:25.779 { 00:03:25.779 "code": -32603, 00:03:25.779 "message": "Internal error" 00:03:25.779 } 00:03:25.779 14:56:27 -- common/autotest_common.sh@1591 -- # true 00:03:25.779 14:56:27 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:25.779 14:56:27 -- common/autotest_common.sh@1595 -- # killprocess 1231381 00:03:25.779 14:56:27 -- common/autotest_common.sh@954 -- # '[' -z 1231381 ']' 00:03:25.779 14:56:27 -- common/autotest_common.sh@958 -- # kill -0 1231381 00:03:25.779 14:56:27 -- common/autotest_common.sh@959 -- # uname 00:03:25.779 14:56:27 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:25.779 14:56:27 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231381 00:03:25.779 14:56:27 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:25.779 14:56:27 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:25.779 14:56:27 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231381' 00:03:25.779 killing process with pid 1231381 00:03:25.779 14:56:27 -- common/autotest_common.sh@973 -- # kill 1231381 00:03:25.779 14:56:27 -- common/autotest_common.sh@978 -- # wait 1231381 00:03:27.683 14:56:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:27.683 14:56:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:27.683 14:56:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:27.683 14:56:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:27.683 14:56:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:27.683 14:56:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.683 14:56:28 -- common/autotest_common.sh@10 -- # set +x 00:03:27.683 14:56:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:27.683 14:56:29 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
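The OPAL revert attempt above is two RPCs against the freshly started spdk_tgt; stripped of the xtrace noise it is equivalent to the following (controller name, PCI address and password are the ones used in this run). The -32603 "Internal error" response is tolerated because the helper treats a failed revert as non-fatal:
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true   # revert fails on this drive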
00:03:27.683 14:56:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.683 14:56:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.683 14:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:27.683 ************************************ 00:03:27.683 START TEST env 00:03:27.683 ************************************ 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:27.683 * Looking for test storage... 00:03:27.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:27.683 14:56:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:27.683 14:56:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.683 14:56:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.683 14:56:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.683 14:56:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.683 14:56:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.683 14:56:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.683 14:56:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.683 14:56:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.683 14:56:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.683 14:56:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.683 14:56:29 env -- scripts/common.sh@344 -- # case "$op" in 00:03:27.683 14:56:29 env -- scripts/common.sh@345 -- # : 1 00:03:27.683 14:56:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.683 14:56:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:27.683 14:56:29 env -- scripts/common.sh@365 -- # decimal 1 00:03:27.683 14:56:29 env -- scripts/common.sh@353 -- # local d=1 00:03:27.683 14:56:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.683 14:56:29 env -- scripts/common.sh@355 -- # echo 1 00:03:27.683 14:56:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.683 14:56:29 env -- scripts/common.sh@366 -- # decimal 2 00:03:27.683 14:56:29 env -- scripts/common.sh@353 -- # local d=2 00:03:27.683 14:56:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.683 14:56:29 env -- scripts/common.sh@355 -- # echo 2 00:03:27.683 14:56:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.683 14:56:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.683 14:56:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.683 14:56:29 env -- scripts/common.sh@368 -- # return 0 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.683 --rc genhtml_branch_coverage=1 00:03:27.683 --rc genhtml_function_coverage=1 00:03:27.683 --rc genhtml_legend=1 00:03:27.683 --rc geninfo_all_blocks=1 00:03:27.683 --rc geninfo_unexecuted_blocks=1 00:03:27.683 00:03:27.683 ' 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.683 --rc genhtml_branch_coverage=1 00:03:27.683 --rc genhtml_function_coverage=1 00:03:27.683 --rc genhtml_legend=1 00:03:27.683 --rc geninfo_all_blocks=1 00:03:27.683 --rc geninfo_unexecuted_blocks=1 00:03:27.683 00:03:27.683 ' 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.683 --rc genhtml_branch_coverage=1 00:03:27.683 --rc genhtml_function_coverage=1 00:03:27.683 --rc genhtml_legend=1 00:03:27.683 --rc geninfo_all_blocks=1 00:03:27.683 --rc geninfo_unexecuted_blocks=1 00:03:27.683 00:03:27.683 ' 00:03:27.683 14:56:29 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:27.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.684 --rc genhtml_branch_coverage=1 00:03:27.684 --rc genhtml_function_coverage=1 00:03:27.684 --rc genhtml_legend=1 00:03:27.684 --rc geninfo_all_blocks=1 00:03:27.684 --rc geninfo_unexecuted_blocks=1 00:03:27.684 00:03:27.684 ' 00:03:27.684 14:56:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.684 14:56:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.684 14:56:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.684 14:56:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.684 ************************************ 00:03:27.684 START TEST env_memory 00:03:27.684 ************************************ 00:03:27.684 14:56:29 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.684 00:03:27.684 00:03:27.684 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.684 http://cunit.sourceforge.net/ 00:03:27.684 00:03:27.684 00:03:27.684 Suite: memory 00:03:27.684 Test: alloc and free memory map ...[2024-12-09 14:56:29.281825] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:27.684 passed 00:03:27.684 Test: mem map translation ...[2024-12-09 14:56:29.299491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:27.684 [2024-12-09 14:56:29.299504] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:27.684 [2024-12-09 14:56:29.299538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:27.684 [2024-12-09 14:56:29.299543] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:27.684 passed 00:03:27.684 Test: mem map registration ...[2024-12-09 14:56:29.335150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:27.684 [2024-12-09 14:56:29.335163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:27.684 passed 00:03:27.684 Test: mem map adjacent registrations ...passed 00:03:27.684 00:03:27.684 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.684 suites 1 1 n/a 0 0 00:03:27.684 tests 4 4 4 0 0 00:03:27.684 asserts 152 152 152 0 n/a 00:03:27.684 00:03:27.684 Elapsed time = 0.131 seconds 00:03:27.684 00:03:27.684 real 0m0.144s 00:03:27.684 user 0m0.136s 00:03:27.684 sys 0m0.007s 00:03:27.684 14:56:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.684 14:56:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:27.684 ************************************ 00:03:27.684 END TEST env_memory 00:03:27.684 ************************************ 00:03:27.684 14:56:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.684 14:56:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.684 14:56:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.684 14:56:29 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.684 ************************************ 00:03:27.684 START TEST env_vtophys 00:03:27.684 ************************************ 00:03:27.684 14:56:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.684 EAL: lib.eal log level changed from notice to debug 00:03:27.684 EAL: Detected lcore 0 as core 0 on socket 0 00:03:27.684 EAL: Detected lcore 1 as core 1 on socket 0 00:03:27.684 EAL: Detected lcore 2 as core 2 on socket 0 00:03:27.684 EAL: Detected lcore 3 as core 3 on socket 0 00:03:27.684 EAL: Detected lcore 4 as core 4 on socket 0 00:03:27.684 EAL: Detected lcore 5 as core 5 on socket 0 00:03:27.684 EAL: Detected lcore 6 as core 6 on socket 0 00:03:27.684 EAL: Detected lcore 7 as core 8 on socket 0 00:03:27.684 EAL: Detected lcore 8 as core 9 on socket 0 00:03:27.684 EAL: Detected lcore 9 as core 10 on socket 0 00:03:27.684 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:27.684 EAL: Detected lcore 11 as core 12 on socket 0 00:03:27.684 EAL: Detected lcore 12 as core 13 on socket 0 00:03:27.684 EAL: Detected lcore 13 as core 16 on socket 0 00:03:27.684 EAL: Detected lcore 14 as core 17 on socket 0 00:03:27.684 EAL: Detected lcore 15 as core 18 on socket 0 00:03:27.684 EAL: Detected lcore 16 as core 19 on socket 0 00:03:27.684 EAL: Detected lcore 17 as core 20 on socket 0 00:03:27.684 EAL: Detected lcore 18 as core 21 on socket 0 00:03:27.684 EAL: Detected lcore 19 as core 25 on socket 0 00:03:27.684 EAL: Detected lcore 20 as core 26 on socket 0 00:03:27.684 EAL: Detected lcore 21 as core 27 on socket 0 00:03:27.684 EAL: Detected lcore 22 as core 28 on socket 0 00:03:27.684 EAL: Detected lcore 23 as core 29 on socket 0 00:03:27.684 EAL: Detected lcore 24 as core 0 on socket 1 00:03:27.684 EAL: Detected lcore 25 as core 1 on socket 1 00:03:27.684 EAL: Detected lcore 26 as core 2 on socket 1 00:03:27.684 EAL: Detected lcore 27 as core 3 on socket 1 00:03:27.684 EAL: Detected lcore 28 as core 4 on socket 1 00:03:27.684 EAL: Detected lcore 29 as core 5 on socket 1 00:03:27.684 EAL: Detected lcore 30 as core 6 on socket 1 00:03:27.684 EAL: Detected lcore 31 as core 8 on socket 1 00:03:27.684 EAL: Detected lcore 32 as core 9 on socket 1 00:03:27.684 EAL: Detected lcore 33 as core 10 on socket 1 00:03:27.684 EAL: Detected lcore 34 as core 11 on socket 1 00:03:27.684 EAL: Detected lcore 35 as core 12 on socket 1 00:03:27.684 EAL: Detected lcore 36 as core 13 on socket 1 00:03:27.684 EAL: Detected lcore 37 as core 16 on socket 1 00:03:27.684 EAL: Detected lcore 38 as core 17 on socket 1 00:03:27.684 EAL: Detected lcore 39 as core 18 on socket 1 00:03:27.684 EAL: Detected lcore 40 as core 19 on socket 1 00:03:27.684 EAL: Detected lcore 41 as core 20 on socket 1 00:03:27.684 EAL: Detected lcore 42 as core 21 on socket 1 00:03:27.684 EAL: Detected lcore 43 as core 25 on socket 1 00:03:27.684 EAL: Detected lcore 44 as core 26 on socket 1 00:03:27.684 EAL: Detected lcore 45 as core 27 on socket 1 00:03:27.684 EAL: Detected lcore 46 as core 28 on socket 1 00:03:27.684 EAL: Detected lcore 47 as core 29 on socket 1 00:03:27.684 EAL: Detected lcore 48 as core 0 on socket 0 00:03:27.684 EAL: Detected lcore 49 as core 1 on socket 0 00:03:27.684 EAL: Detected lcore 50 as core 2 on socket 0 00:03:27.684 EAL: Detected lcore 51 as core 3 on socket 0 00:03:27.684 EAL: Detected lcore 52 as core 4 on socket 0 00:03:27.684 EAL: Detected lcore 53 as core 5 on socket 0 00:03:27.684 EAL: Detected lcore 54 as core 6 on socket 0 00:03:27.684 EAL: Detected lcore 55 as core 8 on socket 0 00:03:27.684 EAL: Detected lcore 56 as core 9 on socket 0 00:03:27.684 EAL: Detected lcore 57 as core 10 on socket 0 00:03:27.684 EAL: Detected lcore 58 as core 11 on socket 0 00:03:27.684 EAL: Detected lcore 59 as core 12 on socket 0 00:03:27.684 EAL: Detected lcore 60 as core 13 on socket 0 00:03:27.684 EAL: Detected lcore 61 as core 16 on socket 0 00:03:27.684 EAL: Detected lcore 62 as core 17 on socket 0 00:03:27.684 EAL: Detected lcore 63 as core 18 on socket 0 00:03:27.684 EAL: Detected lcore 64 as core 19 on socket 0 00:03:27.684 EAL: Detected lcore 65 as core 20 on socket 0 00:03:27.684 EAL: Detected lcore 66 as core 21 on socket 0 00:03:27.684 EAL: Detected lcore 67 as core 25 on socket 0 00:03:27.684 EAL: Detected lcore 68 as core 26 on socket 0 00:03:27.684 EAL: Detected lcore 69 as core 27 on socket 0 00:03:27.684 EAL: Detected lcore 70 as core 28 on socket 0 00:03:27.684 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:27.684 EAL: Detected lcore 72 as core 0 on socket 1 00:03:27.684 EAL: Detected lcore 73 as core 1 on socket 1 00:03:27.684 EAL: Detected lcore 74 as core 2 on socket 1 00:03:27.684 EAL: Detected lcore 75 as core 3 on socket 1 00:03:27.684 EAL: Detected lcore 76 as core 4 on socket 1 00:03:27.684 EAL: Detected lcore 77 as core 5 on socket 1 00:03:27.684 EAL: Detected lcore 78 as core 6 on socket 1 00:03:27.684 EAL: Detected lcore 79 as core 8 on socket 1 00:03:27.684 EAL: Detected lcore 80 as core 9 on socket 1 00:03:27.684 EAL: Detected lcore 81 as core 10 on socket 1 00:03:27.684 EAL: Detected lcore 82 as core 11 on socket 1 00:03:27.684 EAL: Detected lcore 83 as core 12 on socket 1 00:03:27.684 EAL: Detected lcore 84 as core 13 on socket 1 00:03:27.684 EAL: Detected lcore 85 as core 16 on socket 1 00:03:27.684 EAL: Detected lcore 86 as core 17 on socket 1 00:03:27.684 EAL: Detected lcore 87 as core 18 on socket 1 00:03:27.684 EAL: Detected lcore 88 as core 19 on socket 1 00:03:27.684 EAL: Detected lcore 89 as core 20 on socket 1 00:03:27.684 EAL: Detected lcore 90 as core 21 on socket 1 00:03:27.684 EAL: Detected lcore 91 as core 25 on socket 1 00:03:27.684 EAL: Detected lcore 92 as core 26 on socket 1 00:03:27.684 EAL: Detected lcore 93 as core 27 on socket 1 00:03:27.684 EAL: Detected lcore 94 as core 28 on socket 1 00:03:27.684 EAL: Detected lcore 95 as core 29 on socket 1 00:03:27.944 EAL: Maximum logical cores by configuration: 128 00:03:27.944 EAL: Detected CPU lcores: 96 00:03:27.944 EAL: Detected NUMA nodes: 2 00:03:27.944 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:27.944 EAL: Detected shared linkage of DPDK 00:03:27.944 EAL: No shared files mode enabled, IPC will be disabled 00:03:27.944 EAL: Bus pci wants IOVA as 'DC' 00:03:27.944 EAL: Buses did not request a specific IOVA mode. 00:03:27.944 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:27.944 EAL: Selected IOVA mode 'VA' 00:03:27.944 EAL: Probing VFIO support... 00:03:27.944 EAL: IOMMU type 1 (Type 1) is supported 00:03:27.944 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:27.944 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:27.944 EAL: VFIO support initialized 00:03:27.944 EAL: Ask a virtual area of 0x2e000 bytes 00:03:27.944 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:27.944 EAL: Setting up physically contiguous memory... 
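The lcore/socket map EAL prints above (96 logical cores across 2 NUMA nodes) can be cross-checked from the shell independently of DPDK; a couple of standard commands, assuming util-linux and the usual sysfs layout:
    lscpu -p=CPU,CORE,SOCKET | grep -vc '^#'     # 96 logical CPUs on this host
    cat /sys/devices/system/node/online          # 0-1, matching "Detected NUMA nodes: 2"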
00:03:27.944 EAL: Setting maximum number of open files to 524288 00:03:27.944 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:27.944 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:27.944 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:27.944 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.944 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:27.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.944 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.944 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:27.944 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:27.944 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.944 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:27.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.944 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.944 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:27.944 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:27.945 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.945 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:27.945 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.945 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.945 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:27.945 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:27.945 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.945 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:27.945 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.945 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.945 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:27.945 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:27.945 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:27.945 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.945 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:27.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.945 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.945 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:27.945 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:27.945 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.945 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:27.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.945 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.945 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:27.945 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:27.945 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.945 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:27.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.945 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.945 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:27.945 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:27.945 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.945 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:27.945 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.945 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.945 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:27.945 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:27.945 EAL: Hugepages will be freed exactly as allocated. 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: TSC frequency is ~2100000 KHz 00:03:27.945 EAL: Main lcore 0 is ready (tid=7f4d377f6a00;cpuset=[0]) 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 0 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 2MB 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:27.945 EAL: Mem event callback 'spdk:(nil)' registered 00:03:27.945 00:03:27.945 00:03:27.945 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.945 http://cunit.sourceforge.net/ 00:03:27.945 00:03:27.945 00:03:27.945 Suite: components_suite 00:03:27.945 Test: vtophys_malloc_test ...passed 00:03:27.945 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 4MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 4MB 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 6MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 6MB 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 10MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 10MB 00:03:27.945 EAL: Trying to obtain current memory policy. 
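The vtophys suite that starts above can be rerun on its own against the tree built earlier in this log; hugepages have to be allocated first, which scripts/setup.sh does (HUGEMEM below is the standard setup.sh knob, sized here only as an example):
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo HUGEMEM=4096 ./scripts/setup.sh        # reserve ~4 GiB of 2 MiB hugepages
    ./test/env/vtophys/vtophys                  # runs vtophys_malloc_test and vtophys_spdk_malloc_test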
00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 18MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 18MB 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 34MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 34MB 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 66MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 66MB 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 130MB 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was shrunk by 130MB 00:03:27.945 EAL: Trying to obtain current memory policy. 00:03:27.945 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.945 EAL: Restoring previous memory policy: 4 00:03:27.945 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.945 EAL: request: mp_malloc_sync 00:03:27.945 EAL: No shared files mode enabled, IPC is disabled 00:03:27.945 EAL: Heap on socket 0 was expanded by 258MB 00:03:28.204 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.204 EAL: request: mp_malloc_sync 00:03:28.204 EAL: No shared files mode enabled, IPC is disabled 00:03:28.204 EAL: Heap on socket 0 was shrunk by 258MB 00:03:28.204 EAL: Trying to obtain current memory policy. 
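Each expand/shrink pair in this vtophys_spdk_malloc_test output is a hugepage-backed allocation being mapped in and released again, which is what triggers the 'spdk:(nil)' mem event callbacks. If you want to watch that happen, the kernel's hugepage counters can be polled from a second shell while the test runs:
    # HugePages_Free drops while a buffer is mapped and recovers after the shrink
    watch -n1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'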
00:03:28.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.204 EAL: Restoring previous memory policy: 4 00:03:28.204 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.204 EAL: request: mp_malloc_sync 00:03:28.204 EAL: No shared files mode enabled, IPC is disabled 00:03:28.204 EAL: Heap on socket 0 was expanded by 514MB 00:03:28.204 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.464 EAL: request: mp_malloc_sync 00:03:28.464 EAL: No shared files mode enabled, IPC is disabled 00:03:28.464 EAL: Heap on socket 0 was shrunk by 514MB 00:03:28.464 EAL: Trying to obtain current memory policy. 00:03:28.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.464 EAL: Restoring previous memory policy: 4 00:03:28.464 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.464 EAL: request: mp_malloc_sync 00:03:28.464 EAL: No shared files mode enabled, IPC is disabled 00:03:28.464 EAL: Heap on socket 0 was expanded by 1026MB 00:03:28.805 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.065 EAL: request: mp_malloc_sync 00:03:29.065 EAL: No shared files mode enabled, IPC is disabled 00:03:29.065 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:29.065 passed 00:03:29.065 00:03:29.065 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.065 suites 1 1 n/a 0 0 00:03:29.065 tests 2 2 2 0 0 00:03:29.065 asserts 497 497 497 0 n/a 00:03:29.065 00:03:29.065 Elapsed time = 0.974 seconds 00:03:29.065 EAL: Calling mem event callback 'spdk:(nil)' 00:03:29.065 EAL: request: mp_malloc_sync 00:03:29.065 EAL: No shared files mode enabled, IPC is disabled 00:03:29.065 EAL: Heap on socket 0 was shrunk by 2MB 00:03:29.065 EAL: No shared files mode enabled, IPC is disabled 00:03:29.065 EAL: No shared files mode enabled, IPC is disabled 00:03:29.065 EAL: No shared files mode enabled, IPC is disabled 00:03:29.065 00:03:29.065 real 0m1.109s 00:03:29.065 user 0m0.648s 00:03:29.065 sys 0m0.432s 00:03:29.065 14:56:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.065 14:56:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:29.065 ************************************ 00:03:29.065 END TEST env_vtophys 00:03:29.065 ************************************ 00:03:29.065 14:56:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:29.065 14:56:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.065 14:56:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.065 14:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.065 ************************************ 00:03:29.065 START TEST env_pci 00:03:29.065 ************************************ 00:03:29.065 14:56:30 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:29.065 00:03:29.065 00:03:29.065 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.065 http://cunit.sourceforge.net/ 00:03:29.065 00:03:29.065 00:03:29.065 Suite: pci 00:03:29.065 Test: pci_hook ...[2024-12-09 14:56:30.652279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1232655 has claimed it 00:03:29.065 EAL: Cannot find device (10000:00:01.0) 00:03:29.065 EAL: Failed to attach device on primary process 00:03:29.065 passed 00:03:29.065 00:03:29.065 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:29.065 suites 1 1 n/a 0 0 00:03:29.065 tests 1 1 1 0 0 00:03:29.065 asserts 25 25 25 0 n/a 00:03:29.065 00:03:29.065 Elapsed time = 0.025 seconds 00:03:29.065 00:03:29.065 real 0m0.045s 00:03:29.065 user 0m0.009s 00:03:29.065 sys 0m0.035s 00:03:29.065 14:56:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.065 14:56:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:29.065 ************************************ 00:03:29.065 END TEST env_pci 00:03:29.065 ************************************ 00:03:29.065 14:56:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:29.065 14:56:30 env -- env/env.sh@15 -- # uname 00:03:29.065 14:56:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:29.065 14:56:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:29.065 14:56:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.065 14:56:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:29.065 14:56:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.065 14:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.065 ************************************ 00:03:29.065 START TEST env_dpdk_post_init 00:03:29.065 ************************************ 00:03:29.065 14:56:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:29.065 EAL: Detected CPU lcores: 96 00:03:29.065 EAL: Detected NUMA nodes: 2 00:03:29.065 EAL: Detected shared linkage of DPDK 00:03:29.065 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:29.065 EAL: Selected IOVA mode 'VA' 00:03:29.065 EAL: VFIO support initialized 00:03:29.065 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:29.324 EAL: Using IOMMU type 1 (Type 1) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:29.324 EAL: Ignore mapping IO port bar(1) 00:03:29.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:30.262 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:30.262 EAL: Ignore mapping IO port bar(1) 00:03:30.262 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:33.549 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:33.549 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:33.549 Starting DPDK initialization... 00:03:33.549 Starting SPDK post initialization... 00:03:33.549 SPDK NVMe probe 00:03:33.549 Attaching to 0000:5e:00.0 00:03:33.549 Attached to 0000:5e:00.0 00:03:33.549 Cleaning up... 00:03:33.549 00:03:33.549 real 0m4.398s 00:03:33.549 user 0m3.016s 00:03:33.549 sys 0m0.453s 00:03:33.549 14:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.549 14:56:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:33.549 ************************************ 00:03:33.549 END TEST env_dpdk_post_init 00:03:33.549 ************************************ 00:03:33.549 14:56:35 env -- env/env.sh@26 -- # uname 00:03:33.549 14:56:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:33.549 14:56:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.549 14:56:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.549 14:56:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.549 14:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.549 ************************************ 00:03:33.549 START TEST env_mem_callbacks 00:03:33.549 ************************************ 00:03:33.549 14:56:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.549 EAL: Detected CPU lcores: 96 00:03:33.549 EAL: Detected NUMA nodes: 2 00:03:33.549 EAL: Detected shared linkage of DPDK 00:03:33.549 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.549 EAL: Selected IOVA mode 'VA' 00:03:33.549 EAL: VFIO support initialized 00:03:33.549 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.549 00:03:33.549 00:03:33.549 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.549 http://cunit.sourceforge.net/ 00:03:33.549 00:03:33.549 00:03:33.549 Suite: memory 00:03:33.549 Test: test ... 
00:03:33.549 register 0x200000200000 2097152 00:03:33.549 malloc 3145728 00:03:33.549 register 0x200000400000 4194304 00:03:33.549 buf 0x200000500000 len 3145728 PASSED 00:03:33.549 malloc 64 00:03:33.549 buf 0x2000004fff40 len 64 PASSED 00:03:33.549 malloc 4194304 00:03:33.549 register 0x200000800000 6291456 00:03:33.549 buf 0x200000a00000 len 4194304 PASSED 00:03:33.549 free 0x200000500000 3145728 00:03:33.549 free 0x2000004fff40 64 00:03:33.549 unregister 0x200000400000 4194304 PASSED 00:03:33.549 free 0x200000a00000 4194304 00:03:33.549 unregister 0x200000800000 6291456 PASSED 00:03:33.549 malloc 8388608 00:03:33.549 register 0x200000400000 10485760 00:03:33.549 buf 0x200000600000 len 8388608 PASSED 00:03:33.549 free 0x200000600000 8388608 00:03:33.549 unregister 0x200000400000 10485760 PASSED 00:03:33.549 passed 00:03:33.549 00:03:33.549 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.549 suites 1 1 n/a 0 0 00:03:33.549 tests 1 1 1 0 0 00:03:33.549 asserts 15 15 15 0 n/a 00:03:33.549 00:03:33.549 Elapsed time = 0.008 seconds 00:03:33.549 00:03:33.549 real 0m0.059s 00:03:33.549 user 0m0.017s 00:03:33.549 sys 0m0.042s 00:03:33.549 14:56:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.549 14:56:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:33.549 ************************************ 00:03:33.549 END TEST env_mem_callbacks 00:03:33.549 ************************************ 00:03:33.549 00:03:33.549 real 0m6.291s 00:03:33.549 user 0m4.062s 00:03:33.549 sys 0m1.309s 00:03:33.549 14:56:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:33.549 14:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.549 ************************************ 00:03:33.549 END TEST env 00:03:33.549 ************************************ 00:03:33.808 14:56:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.808 14:56:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.808 14:56:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.808 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:03:33.808 ************************************ 00:03:33.808 START TEST rpc 00:03:33.808 ************************************ 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.808 * Looking for test storage... 
00:03:33.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.808 14:56:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.808 14:56:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.808 14:56:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.808 14:56:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.808 14:56:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.808 14:56:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:33.808 14:56:35 rpc -- scripts/common.sh@345 -- # : 1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.808 14:56:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.808 14:56:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@353 -- # local d=1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.808 14:56:35 rpc -- scripts/common.sh@355 -- # echo 1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.808 14:56:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@353 -- # local d=2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.808 14:56:35 rpc -- scripts/common.sh@355 -- # echo 2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.808 14:56:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.808 14:56:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.808 14:56:35 rpc -- scripts/common.sh@368 -- # return 0 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.808 --rc genhtml_branch_coverage=1 00:03:33.808 --rc genhtml_function_coverage=1 00:03:33.808 --rc genhtml_legend=1 00:03:33.808 --rc geninfo_all_blocks=1 00:03:33.808 --rc geninfo_unexecuted_blocks=1 00:03:33.808 00:03:33.808 ' 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.808 --rc genhtml_branch_coverage=1 00:03:33.808 --rc genhtml_function_coverage=1 00:03:33.808 --rc genhtml_legend=1 00:03:33.808 --rc geninfo_all_blocks=1 00:03:33.808 --rc geninfo_unexecuted_blocks=1 00:03:33.808 00:03:33.808 ' 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.808 --rc genhtml_branch_coverage=1 00:03:33.808 --rc genhtml_function_coverage=1 
00:03:33.808 --rc genhtml_legend=1 00:03:33.808 --rc geninfo_all_blocks=1 00:03:33.808 --rc geninfo_unexecuted_blocks=1 00:03:33.808 00:03:33.808 ' 00:03:33.808 14:56:35 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.808 --rc genhtml_branch_coverage=1 00:03:33.808 --rc genhtml_function_coverage=1 00:03:33.808 --rc genhtml_legend=1 00:03:33.808 --rc geninfo_all_blocks=1 00:03:33.808 --rc geninfo_unexecuted_blocks=1 00:03:33.808 00:03:33.808 ' 00:03:33.808 14:56:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1233652 00:03:33.809 14:56:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:33.809 14:56:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.809 14:56:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1233652 00:03:33.809 14:56:35 rpc -- common/autotest_common.sh@835 -- # '[' -z 1233652 ']' 00:03:33.809 14:56:35 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.809 14:56:35 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.809 14:56:35 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.809 14:56:35 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.809 14:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.068 [2024-12-09 14:56:35.623431] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:03:34.068 [2024-12-09 14:56:35.623476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233652 ] 00:03:34.068 [2024-12-09 14:56:35.698492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.068 [2024-12-09 14:56:35.738412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:34.068 [2024-12-09 14:56:35.738448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1233652' to capture a snapshot of events at runtime. 00:03:34.068 [2024-12-09 14:56:35.738456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:34.068 [2024-12-09 14:56:35.738462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:34.068 [2024-12-09 14:56:35.738467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1233652 for offline analysis/debug. 
00:03:34.068 [2024-12-09 14:56:35.738987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.327 14:56:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.327 14:56:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:34.327 14:56:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.327 14:56:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:34.327 14:56:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:34.327 14:56:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:34.327 14:56:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.327 14:56:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.327 14:56:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 ************************************ 00:03:34.327 START TEST rpc_integrity 00:03:34.327 ************************************ 00:03:34.327 14:56:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:34.327 14:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.327 14:56:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 14:56:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 14:56:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 14:56:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.327 14:56:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.327 { 00:03:34.327 "name": "Malloc0", 00:03:34.327 "aliases": [ 00:03:34.327 "59e13312-af02-4417-9f95-53b5d49f83e2" 00:03:34.327 ], 00:03:34.327 "product_name": "Malloc disk", 00:03:34.327 "block_size": 512, 00:03:34.327 "num_blocks": 16384, 00:03:34.327 "uuid": "59e13312-af02-4417-9f95-53b5d49f83e2", 00:03:34.327 "assigned_rate_limits": { 00:03:34.327 "rw_ios_per_sec": 0, 00:03:34.327 "rw_mbytes_per_sec": 0, 00:03:34.327 "r_mbytes_per_sec": 0, 00:03:34.327 "w_mbytes_per_sec": 0 00:03:34.327 }, 
00:03:34.327 "claimed": false, 00:03:34.327 "zoned": false, 00:03:34.327 "supported_io_types": { 00:03:34.327 "read": true, 00:03:34.327 "write": true, 00:03:34.327 "unmap": true, 00:03:34.327 "flush": true, 00:03:34.327 "reset": true, 00:03:34.327 "nvme_admin": false, 00:03:34.327 "nvme_io": false, 00:03:34.327 "nvme_io_md": false, 00:03:34.327 "write_zeroes": true, 00:03:34.327 "zcopy": true, 00:03:34.327 "get_zone_info": false, 00:03:34.327 "zone_management": false, 00:03:34.327 "zone_append": false, 00:03:34.327 "compare": false, 00:03:34.327 "compare_and_write": false, 00:03:34.327 "abort": true, 00:03:34.327 "seek_hole": false, 00:03:34.327 "seek_data": false, 00:03:34.327 "copy": true, 00:03:34.327 "nvme_iov_md": false 00:03:34.327 }, 00:03:34.327 "memory_domains": [ 00:03:34.327 { 00:03:34.327 "dma_device_id": "system", 00:03:34.327 "dma_device_type": 1 00:03:34.327 }, 00:03:34.327 { 00:03:34.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.327 "dma_device_type": 2 00:03:34.327 } 00:03:34.327 ], 00:03:34.327 "driver_specific": {} 00:03:34.327 } 00:03:34.327 ]' 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.327 [2024-12-09 14:56:36.111257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:34.327 [2024-12-09 14:56:36.111286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.327 [2024-12-09 14:56:36.111298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2417a40 00:03:34.327 [2024-12-09 14:56:36.111304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.327 [2024-12-09 14:56:36.112365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.327 [2024-12-09 14:56:36.112387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.327 Passthru0 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.327 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.327 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.587 { 00:03:34.587 "name": "Malloc0", 00:03:34.587 "aliases": [ 00:03:34.587 "59e13312-af02-4417-9f95-53b5d49f83e2" 00:03:34.587 ], 00:03:34.587 "product_name": "Malloc disk", 00:03:34.587 "block_size": 512, 00:03:34.587 "num_blocks": 16384, 00:03:34.587 "uuid": "59e13312-af02-4417-9f95-53b5d49f83e2", 00:03:34.587 "assigned_rate_limits": { 00:03:34.587 "rw_ios_per_sec": 0, 00:03:34.587 "rw_mbytes_per_sec": 0, 00:03:34.587 "r_mbytes_per_sec": 0, 00:03:34.587 "w_mbytes_per_sec": 0 00:03:34.587 }, 00:03:34.587 "claimed": true, 00:03:34.587 "claim_type": "exclusive_write", 00:03:34.587 "zoned": false, 00:03:34.587 "supported_io_types": { 00:03:34.587 "read": true, 00:03:34.587 "write": true, 00:03:34.587 "unmap": true, 00:03:34.587 "flush": 
true, 00:03:34.587 "reset": true, 00:03:34.587 "nvme_admin": false, 00:03:34.587 "nvme_io": false, 00:03:34.587 "nvme_io_md": false, 00:03:34.587 "write_zeroes": true, 00:03:34.587 "zcopy": true, 00:03:34.587 "get_zone_info": false, 00:03:34.587 "zone_management": false, 00:03:34.587 "zone_append": false, 00:03:34.587 "compare": false, 00:03:34.587 "compare_and_write": false, 00:03:34.587 "abort": true, 00:03:34.587 "seek_hole": false, 00:03:34.587 "seek_data": false, 00:03:34.587 "copy": true, 00:03:34.587 "nvme_iov_md": false 00:03:34.587 }, 00:03:34.587 "memory_domains": [ 00:03:34.587 { 00:03:34.587 "dma_device_id": "system", 00:03:34.587 "dma_device_type": 1 00:03:34.587 }, 00:03:34.587 { 00:03:34.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.587 "dma_device_type": 2 00:03:34.587 } 00:03:34.587 ], 00:03:34.587 "driver_specific": {} 00:03:34.587 }, 00:03:34.587 { 00:03:34.587 "name": "Passthru0", 00:03:34.587 "aliases": [ 00:03:34.587 "b4e7f6b7-135d-5610-9785-10876b7e2a94" 00:03:34.587 ], 00:03:34.587 "product_name": "passthru", 00:03:34.587 "block_size": 512, 00:03:34.587 "num_blocks": 16384, 00:03:34.587 "uuid": "b4e7f6b7-135d-5610-9785-10876b7e2a94", 00:03:34.587 "assigned_rate_limits": { 00:03:34.587 "rw_ios_per_sec": 0, 00:03:34.587 "rw_mbytes_per_sec": 0, 00:03:34.587 "r_mbytes_per_sec": 0, 00:03:34.587 "w_mbytes_per_sec": 0 00:03:34.587 }, 00:03:34.587 "claimed": false, 00:03:34.587 "zoned": false, 00:03:34.587 "supported_io_types": { 00:03:34.587 "read": true, 00:03:34.587 "write": true, 00:03:34.587 "unmap": true, 00:03:34.587 "flush": true, 00:03:34.587 "reset": true, 00:03:34.587 "nvme_admin": false, 00:03:34.587 "nvme_io": false, 00:03:34.587 "nvme_io_md": false, 00:03:34.587 "write_zeroes": true, 00:03:34.587 "zcopy": true, 00:03:34.587 "get_zone_info": false, 00:03:34.587 "zone_management": false, 00:03:34.587 "zone_append": false, 00:03:34.587 "compare": false, 00:03:34.587 "compare_and_write": false, 00:03:34.587 "abort": true, 00:03:34.587 "seek_hole": false, 00:03:34.587 "seek_data": false, 00:03:34.587 "copy": true, 00:03:34.587 "nvme_iov_md": false 00:03:34.587 }, 00:03:34.587 "memory_domains": [ 00:03:34.587 { 00:03:34.587 "dma_device_id": "system", 00:03:34.587 "dma_device_type": 1 00:03:34.587 }, 00:03:34.587 { 00:03:34.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.587 "dma_device_type": 2 00:03:34.587 } 00:03:34.587 ], 00:03:34.587 "driver_specific": { 00:03:34.587 "passthru": { 00:03:34.587 "name": "Passthru0", 00:03:34.587 "base_bdev_name": "Malloc0" 00:03:34.587 } 00:03:34.587 } 00:03:34.587 } 00:03:34.587 ]' 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.587 14:56:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.587 00:03:34.587 real 0m0.272s 00:03:34.587 user 0m0.172s 00:03:34.587 sys 0m0.038s 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.587 14:56:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.587 ************************************ 00:03:34.587 END TEST rpc_integrity 00:03:34.587 ************************************ 00:03:34.587 14:56:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:34.587 14:56:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.587 14:56:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.588 14:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.588 ************************************ 00:03:34.588 START TEST rpc_plugins 00:03:34.588 ************************************ 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:34.588 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.588 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.588 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.588 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.588 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.588 { 00:03:34.588 "name": "Malloc1", 00:03:34.588 "aliases": [ 00:03:34.588 "e92a1bed-7238-4090-8182-d3a9f5a56fbb" 00:03:34.588 ], 00:03:34.588 "product_name": "Malloc disk", 00:03:34.588 "block_size": 4096, 00:03:34.588 "num_blocks": 256, 00:03:34.588 "uuid": "e92a1bed-7238-4090-8182-d3a9f5a56fbb", 00:03:34.588 "assigned_rate_limits": { 00:03:34.588 "rw_ios_per_sec": 0, 00:03:34.588 "rw_mbytes_per_sec": 0, 00:03:34.588 "r_mbytes_per_sec": 0, 00:03:34.588 "w_mbytes_per_sec": 0 00:03:34.588 }, 00:03:34.588 "claimed": false, 00:03:34.588 "zoned": false, 00:03:34.588 "supported_io_types": { 00:03:34.588 "read": true, 00:03:34.588 "write": true, 00:03:34.588 "unmap": true, 00:03:34.588 "flush": true, 00:03:34.588 "reset": true, 00:03:34.588 "nvme_admin": false, 00:03:34.588 "nvme_io": false, 00:03:34.588 "nvme_io_md": false, 00:03:34.588 "write_zeroes": true, 00:03:34.588 "zcopy": true, 00:03:34.588 "get_zone_info": false, 00:03:34.588 "zone_management": false, 00:03:34.588 "zone_append": false, 00:03:34.588 "compare": false, 00:03:34.588 "compare_and_write": false, 00:03:34.588 "abort": true, 00:03:34.588 "seek_hole": false, 00:03:34.588 "seek_data": false, 00:03:34.588 "copy": true, 00:03:34.588 "nvme_iov_md": false 
00:03:34.588 }, 00:03:34.588 "memory_domains": [ 00:03:34.588 { 00:03:34.588 "dma_device_id": "system", 00:03:34.588 "dma_device_type": 1 00:03:34.588 }, 00:03:34.588 { 00:03:34.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.588 "dma_device_type": 2 00:03:34.588 } 00:03:34.588 ], 00:03:34.588 "driver_specific": {} 00:03:34.588 } 00:03:34.588 ]' 00:03:34.588 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.846 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.846 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.846 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.846 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.846 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:34.846 14:56:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.846 00:03:34.846 real 0m0.145s 00:03:34.846 user 0m0.086s 00:03:34.846 sys 0m0.021s 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.846 14:56:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.846 ************************************ 00:03:34.846 END TEST rpc_plugins 00:03:34.846 ************************************ 00:03:34.846 14:56:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.846 14:56:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.846 14:56:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.846 14:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.846 ************************************ 00:03:34.846 START TEST rpc_trace_cmd_test 00:03:34.846 ************************************ 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.846 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.846 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1233652", 00:03:34.846 "tpoint_group_mask": "0x8", 00:03:34.846 "iscsi_conn": { 00:03:34.846 "mask": "0x2", 00:03:34.846 "tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.846 "scsi": { 00:03:34.846 "mask": "0x4", 00:03:34.846 "tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.846 "bdev": { 00:03:34.846 "mask": "0x8", 00:03:34.846 "tpoint_mask": "0xffffffffffffffff" 00:03:34.846 }, 00:03:34.846 "nvmf_rdma": { 00:03:34.846 "mask": "0x10", 00:03:34.846 "tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.846 "nvmf_tcp": { 00:03:34.846 "mask": "0x20", 00:03:34.846 
"tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.846 "ftl": { 00:03:34.846 "mask": "0x40", 00:03:34.846 "tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.846 "blobfs": { 00:03:34.846 "mask": "0x80", 00:03:34.846 "tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.846 "dsa": { 00:03:34.846 "mask": "0x200", 00:03:34.846 "tpoint_mask": "0x0" 00:03:34.846 }, 00:03:34.847 "thread": { 00:03:34.847 "mask": "0x400", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "nvme_pcie": { 00:03:34.847 "mask": "0x800", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "iaa": { 00:03:34.847 "mask": "0x1000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "nvme_tcp": { 00:03:34.847 "mask": "0x2000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "bdev_nvme": { 00:03:34.847 "mask": "0x4000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "sock": { 00:03:34.847 "mask": "0x8000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "blob": { 00:03:34.847 "mask": "0x10000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "bdev_raid": { 00:03:34.847 "mask": "0x20000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 }, 00:03:34.847 "scheduler": { 00:03:34.847 "mask": "0x40000", 00:03:34.847 "tpoint_mask": "0x0" 00:03:34.847 } 00:03:34.847 }' 00:03:34.847 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.847 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:34.847 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.847 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.847 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:35.105 00:03:35.105 real 0m0.223s 00:03:35.105 user 0m0.185s 00:03:35.105 sys 0m0.029s 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.105 14:56:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:35.106 ************************************ 00:03:35.106 END TEST rpc_trace_cmd_test 00:03:35.106 ************************************ 00:03:35.106 14:56:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:35.106 14:56:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:35.106 14:56:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:35.106 14:56:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.106 14:56:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.106 14:56:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.106 ************************************ 00:03:35.106 START TEST rpc_daemon_integrity 00:03:35.106 ************************************ 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.106 14:56:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.106 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:35.365 { 00:03:35.365 "name": "Malloc2", 00:03:35.365 "aliases": [ 00:03:35.365 "3a9c580d-bbc4-4958-8e49-3fb5fdf78087" 00:03:35.365 ], 00:03:35.365 "product_name": "Malloc disk", 00:03:35.365 "block_size": 512, 00:03:35.365 "num_blocks": 16384, 00:03:35.365 "uuid": "3a9c580d-bbc4-4958-8e49-3fb5fdf78087", 00:03:35.365 "assigned_rate_limits": { 00:03:35.365 "rw_ios_per_sec": 0, 00:03:35.365 "rw_mbytes_per_sec": 0, 00:03:35.365 "r_mbytes_per_sec": 0, 00:03:35.365 "w_mbytes_per_sec": 0 00:03:35.365 }, 00:03:35.365 "claimed": false, 00:03:35.365 "zoned": false, 00:03:35.365 "supported_io_types": { 00:03:35.365 "read": true, 00:03:35.365 "write": true, 00:03:35.365 "unmap": true, 00:03:35.365 "flush": true, 00:03:35.365 "reset": true, 00:03:35.365 "nvme_admin": false, 00:03:35.365 "nvme_io": false, 00:03:35.365 "nvme_io_md": false, 00:03:35.365 "write_zeroes": true, 00:03:35.365 "zcopy": true, 00:03:35.365 "get_zone_info": false, 00:03:35.365 "zone_management": false, 00:03:35.365 "zone_append": false, 00:03:35.365 "compare": false, 00:03:35.365 "compare_and_write": false, 00:03:35.365 "abort": true, 00:03:35.365 "seek_hole": false, 00:03:35.365 "seek_data": false, 00:03:35.365 "copy": true, 00:03:35.365 "nvme_iov_md": false 00:03:35.365 }, 00:03:35.365 "memory_domains": [ 00:03:35.365 { 00:03:35.365 "dma_device_id": "system", 00:03:35.365 "dma_device_type": 1 00:03:35.365 }, 00:03:35.365 { 00:03:35.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.365 "dma_device_type": 2 00:03:35.365 } 00:03:35.365 ], 00:03:35.365 "driver_specific": {} 00:03:35.365 } 00:03:35.365 ]' 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.365 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.365 [2024-12-09 14:56:36.949519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:35.366 
[2024-12-09 14:56:36.949546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:35.366 [2024-12-09 14:56:36.949556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e52e0 00:03:35.366 [2024-12-09 14:56:36.949562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:35.366 [2024-12-09 14:56:36.950519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:35.366 [2024-12-09 14:56:36.950541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:35.366 Passthru0 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:35.366 { 00:03:35.366 "name": "Malloc2", 00:03:35.366 "aliases": [ 00:03:35.366 "3a9c580d-bbc4-4958-8e49-3fb5fdf78087" 00:03:35.366 ], 00:03:35.366 "product_name": "Malloc disk", 00:03:35.366 "block_size": 512, 00:03:35.366 "num_blocks": 16384, 00:03:35.366 "uuid": "3a9c580d-bbc4-4958-8e49-3fb5fdf78087", 00:03:35.366 "assigned_rate_limits": { 00:03:35.366 "rw_ios_per_sec": 0, 00:03:35.366 "rw_mbytes_per_sec": 0, 00:03:35.366 "r_mbytes_per_sec": 0, 00:03:35.366 "w_mbytes_per_sec": 0 00:03:35.366 }, 00:03:35.366 "claimed": true, 00:03:35.366 "claim_type": "exclusive_write", 00:03:35.366 "zoned": false, 00:03:35.366 "supported_io_types": { 00:03:35.366 "read": true, 00:03:35.366 "write": true, 00:03:35.366 "unmap": true, 00:03:35.366 "flush": true, 00:03:35.366 "reset": true, 00:03:35.366 "nvme_admin": false, 00:03:35.366 "nvme_io": false, 00:03:35.366 "nvme_io_md": false, 00:03:35.366 "write_zeroes": true, 00:03:35.366 "zcopy": true, 00:03:35.366 "get_zone_info": false, 00:03:35.366 "zone_management": false, 00:03:35.366 "zone_append": false, 00:03:35.366 "compare": false, 00:03:35.366 "compare_and_write": false, 00:03:35.366 "abort": true, 00:03:35.366 "seek_hole": false, 00:03:35.366 "seek_data": false, 00:03:35.366 "copy": true, 00:03:35.366 "nvme_iov_md": false 00:03:35.366 }, 00:03:35.366 "memory_domains": [ 00:03:35.366 { 00:03:35.366 "dma_device_id": "system", 00:03:35.366 "dma_device_type": 1 00:03:35.366 }, 00:03:35.366 { 00:03:35.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.366 "dma_device_type": 2 00:03:35.366 } 00:03:35.366 ], 00:03:35.366 "driver_specific": {} 00:03:35.366 }, 00:03:35.366 { 00:03:35.366 "name": "Passthru0", 00:03:35.366 "aliases": [ 00:03:35.366 "84eee21f-75e8-5255-9c6e-b4397ecea98f" 00:03:35.366 ], 00:03:35.366 "product_name": "passthru", 00:03:35.366 "block_size": 512, 00:03:35.366 "num_blocks": 16384, 00:03:35.366 "uuid": "84eee21f-75e8-5255-9c6e-b4397ecea98f", 00:03:35.366 "assigned_rate_limits": { 00:03:35.366 "rw_ios_per_sec": 0, 00:03:35.366 "rw_mbytes_per_sec": 0, 00:03:35.366 "r_mbytes_per_sec": 0, 00:03:35.366 "w_mbytes_per_sec": 0 00:03:35.366 }, 00:03:35.366 "claimed": false, 00:03:35.366 "zoned": false, 00:03:35.366 "supported_io_types": { 00:03:35.366 "read": true, 00:03:35.366 "write": true, 00:03:35.366 "unmap": true, 00:03:35.366 "flush": true, 00:03:35.366 "reset": true, 
00:03:35.366 "nvme_admin": false, 00:03:35.366 "nvme_io": false, 00:03:35.366 "nvme_io_md": false, 00:03:35.366 "write_zeroes": true, 00:03:35.366 "zcopy": true, 00:03:35.366 "get_zone_info": false, 00:03:35.366 "zone_management": false, 00:03:35.366 "zone_append": false, 00:03:35.366 "compare": false, 00:03:35.366 "compare_and_write": false, 00:03:35.366 "abort": true, 00:03:35.366 "seek_hole": false, 00:03:35.366 "seek_data": false, 00:03:35.366 "copy": true, 00:03:35.366 "nvme_iov_md": false 00:03:35.366 }, 00:03:35.366 "memory_domains": [ 00:03:35.366 { 00:03:35.366 "dma_device_id": "system", 00:03:35.366 "dma_device_type": 1 00:03:35.366 }, 00:03:35.366 { 00:03:35.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.366 "dma_device_type": 2 00:03:35.366 } 00:03:35.366 ], 00:03:35.366 "driver_specific": { 00:03:35.366 "passthru": { 00:03:35.366 "name": "Passthru0", 00:03:35.366 "base_bdev_name": "Malloc2" 00:03:35.366 } 00:03:35.366 } 00:03:35.366 } 00:03:35.366 ]' 00:03:35.366 14:56:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:35.366 00:03:35.366 real 0m0.275s 00:03:35.366 user 0m0.179s 00:03:35.366 sys 0m0.035s 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.366 14:56:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:35.366 ************************************ 00:03:35.366 END TEST rpc_daemon_integrity 00:03:35.366 ************************************ 00:03:35.366 14:56:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:35.367 14:56:37 rpc -- rpc/rpc.sh@84 -- # killprocess 1233652 00:03:35.367 14:56:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 1233652 ']' 00:03:35.367 14:56:37 rpc -- common/autotest_common.sh@958 -- # kill -0 1233652 00:03:35.367 14:56:37 rpc -- common/autotest_common.sh@959 -- # uname 00:03:35.367 14:56:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.367 14:56:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233652 
00:03:35.625 14:56:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.625 14:56:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.625 14:56:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233652' 00:03:35.625 killing process with pid 1233652 00:03:35.625 14:56:37 rpc -- common/autotest_common.sh@973 -- # kill 1233652 00:03:35.625 14:56:37 rpc -- common/autotest_common.sh@978 -- # wait 1233652 00:03:35.884 00:03:35.884 real 0m2.078s 00:03:35.884 user 0m2.642s 00:03:35.884 sys 0m0.709s 00:03:35.884 14:56:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.884 14:56:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.884 ************************************ 00:03:35.884 END TEST rpc 00:03:35.884 ************************************ 00:03:35.884 14:56:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.884 14:56:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.884 14:56:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.884 14:56:37 -- common/autotest_common.sh@10 -- # set +x 00:03:35.884 ************************************ 00:03:35.884 START TEST skip_rpc 00:03:35.884 ************************************ 00:03:35.884 14:56:37 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.884 * Looking for test storage... 00:03:35.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.884 14:56:37 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:35.884 14:56:37 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:35.884 14:56:37 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:36.143 14:56:37 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:36.143 14:56:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:36.144 14:56:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:36.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.144 --rc genhtml_branch_coverage=1 00:03:36.144 --rc genhtml_function_coverage=1 00:03:36.144 --rc genhtml_legend=1 00:03:36.144 --rc geninfo_all_blocks=1 00:03:36.144 --rc geninfo_unexecuted_blocks=1 00:03:36.144 00:03:36.144 ' 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:36.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.144 --rc genhtml_branch_coverage=1 00:03:36.144 --rc genhtml_function_coverage=1 00:03:36.144 --rc genhtml_legend=1 00:03:36.144 --rc geninfo_all_blocks=1 00:03:36.144 --rc geninfo_unexecuted_blocks=1 00:03:36.144 00:03:36.144 ' 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:36.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.144 --rc genhtml_branch_coverage=1 00:03:36.144 --rc genhtml_function_coverage=1 00:03:36.144 --rc genhtml_legend=1 00:03:36.144 --rc geninfo_all_blocks=1 00:03:36.144 --rc geninfo_unexecuted_blocks=1 00:03:36.144 00:03:36.144 ' 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:36.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.144 --rc genhtml_branch_coverage=1 00:03:36.144 --rc genhtml_function_coverage=1 00:03:36.144 --rc genhtml_legend=1 00:03:36.144 --rc geninfo_all_blocks=1 00:03:36.144 --rc geninfo_unexecuted_blocks=1 00:03:36.144 00:03:36.144 ' 00:03:36.144 14:56:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.144 14:56:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:36.144 14:56:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.144 14:56:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.144 ************************************ 00:03:36.144 START TEST skip_rpc 00:03:36.144 ************************************ 00:03:36.144 14:56:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:36.144 
14:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1234281 00:03:36.144 14:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.144 14:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:36.144 14:56:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:36.144 [2024-12-09 14:56:37.813099] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:03:36.144 [2024-12-09 14:56:37.813138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234281 ] 00:03:36.144 [2024-12-09 14:56:37.885421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.144 [2024-12-09 14:56:37.923316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1234281 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1234281 ']' 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1234281 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1234281 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1234281' 00:03:41.417 killing process with pid 1234281 00:03:41.417 14:56:42 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1234281 00:03:41.417 14:56:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1234281 00:03:41.417 00:03:41.417 real 0m5.367s 00:03:41.417 user 0m5.118s 00:03:41.417 sys 0m0.290s 00:03:41.417 14:56:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.417 14:56:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.417 ************************************ 00:03:41.417 END TEST skip_rpc 00:03:41.417 ************************************ 00:03:41.417 14:56:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:41.417 14:56:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.417 14:56:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.417 14:56:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.417 ************************************ 00:03:41.417 START TEST skip_rpc_with_json 00:03:41.417 ************************************ 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1235217 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1235217 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1235217 ']' 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:41.417 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.676 [2024-12-09 14:56:43.248124] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:03:41.676 [2024-12-09 14:56:43.248163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235217 ] 00:03:41.676 [2024-12-09 14:56:43.322379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.676 [2024-12-09 14:56:43.359872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.935 [2024-12-09 14:56:43.584125] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.935 request: 00:03:41.935 { 00:03:41.935 "trtype": "tcp", 00:03:41.935 "method": "nvmf_get_transports", 00:03:41.935 "req_id": 1 00:03:41.935 } 00:03:41.935 Got JSON-RPC error response 00:03:41.935 response: 00:03:41.935 { 00:03:41.935 "code": -19, 00:03:41.935 "message": "No such device" 00:03:41.935 } 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.935 [2024-12-09 14:56:43.596234] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.935 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.194 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.194 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.194 { 00:03:42.194 "subsystems": [ 00:03:42.194 { 00:03:42.194 "subsystem": "fsdev", 00:03:42.194 "config": [ 00:03:42.194 { 00:03:42.194 "method": "fsdev_set_opts", 00:03:42.194 "params": { 00:03:42.194 "fsdev_io_pool_size": 65535, 00:03:42.194 "fsdev_io_cache_size": 256 00:03:42.194 } 00:03:42.194 } 00:03:42.194 ] 00:03:42.194 }, 00:03:42.194 { 00:03:42.194 "subsystem": "vfio_user_target", 00:03:42.194 "config": null 00:03:42.194 }, 00:03:42.194 { 00:03:42.194 "subsystem": "keyring", 00:03:42.195 "config": [] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "iobuf", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "iobuf_set_options", 00:03:42.195 "params": { 00:03:42.195 "small_pool_count": 8192, 00:03:42.195 "large_pool_count": 1024, 00:03:42.195 "small_bufsize": 8192, 00:03:42.195 "large_bufsize": 135168, 00:03:42.195 "enable_numa": false 00:03:42.195 } 00:03:42.195 } 
00:03:42.195 ] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "sock", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "sock_set_default_impl", 00:03:42.195 "params": { 00:03:42.195 "impl_name": "posix" 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "sock_impl_set_options", 00:03:42.195 "params": { 00:03:42.195 "impl_name": "ssl", 00:03:42.195 "recv_buf_size": 4096, 00:03:42.195 "send_buf_size": 4096, 00:03:42.195 "enable_recv_pipe": true, 00:03:42.195 "enable_quickack": false, 00:03:42.195 "enable_placement_id": 0, 00:03:42.195 "enable_zerocopy_send_server": true, 00:03:42.195 "enable_zerocopy_send_client": false, 00:03:42.195 "zerocopy_threshold": 0, 00:03:42.195 "tls_version": 0, 00:03:42.195 "enable_ktls": false 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "sock_impl_set_options", 00:03:42.195 "params": { 00:03:42.195 "impl_name": "posix", 00:03:42.195 "recv_buf_size": 2097152, 00:03:42.195 "send_buf_size": 2097152, 00:03:42.195 "enable_recv_pipe": true, 00:03:42.195 "enable_quickack": false, 00:03:42.195 "enable_placement_id": 0, 00:03:42.195 "enable_zerocopy_send_server": true, 00:03:42.195 "enable_zerocopy_send_client": false, 00:03:42.195 "zerocopy_threshold": 0, 00:03:42.195 "tls_version": 0, 00:03:42.195 "enable_ktls": false 00:03:42.195 } 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "vmd", 00:03:42.195 "config": [] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "accel", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "accel_set_options", 00:03:42.195 "params": { 00:03:42.195 "small_cache_size": 128, 00:03:42.195 "large_cache_size": 16, 00:03:42.195 "task_count": 2048, 00:03:42.195 "sequence_count": 2048, 00:03:42.195 "buf_count": 2048 00:03:42.195 } 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "bdev", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "bdev_set_options", 00:03:42.195 "params": { 00:03:42.195 "bdev_io_pool_size": 65535, 00:03:42.195 "bdev_io_cache_size": 256, 00:03:42.195 "bdev_auto_examine": true, 00:03:42.195 "iobuf_small_cache_size": 128, 00:03:42.195 "iobuf_large_cache_size": 16 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "bdev_raid_set_options", 00:03:42.195 "params": { 00:03:42.195 "process_window_size_kb": 1024, 00:03:42.195 "process_max_bandwidth_mb_sec": 0 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "bdev_iscsi_set_options", 00:03:42.195 "params": { 00:03:42.195 "timeout_sec": 30 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "bdev_nvme_set_options", 00:03:42.195 "params": { 00:03:42.195 "action_on_timeout": "none", 00:03:42.195 "timeout_us": 0, 00:03:42.195 "timeout_admin_us": 0, 00:03:42.195 "keep_alive_timeout_ms": 10000, 00:03:42.195 "arbitration_burst": 0, 00:03:42.195 "low_priority_weight": 0, 00:03:42.195 "medium_priority_weight": 0, 00:03:42.195 "high_priority_weight": 0, 00:03:42.195 "nvme_adminq_poll_period_us": 10000, 00:03:42.195 "nvme_ioq_poll_period_us": 0, 00:03:42.195 "io_queue_requests": 0, 00:03:42.195 "delay_cmd_submit": true, 00:03:42.195 "transport_retry_count": 4, 00:03:42.195 "bdev_retry_count": 3, 00:03:42.195 "transport_ack_timeout": 0, 00:03:42.195 "ctrlr_loss_timeout_sec": 0, 00:03:42.195 "reconnect_delay_sec": 0, 00:03:42.195 "fast_io_fail_timeout_sec": 0, 00:03:42.195 "disable_auto_failback": false, 00:03:42.195 "generate_uuids": false, 00:03:42.195 "transport_tos": 
0, 00:03:42.195 "nvme_error_stat": false, 00:03:42.195 "rdma_srq_size": 0, 00:03:42.195 "io_path_stat": false, 00:03:42.195 "allow_accel_sequence": false, 00:03:42.195 "rdma_max_cq_size": 0, 00:03:42.195 "rdma_cm_event_timeout_ms": 0, 00:03:42.195 "dhchap_digests": [ 00:03:42.195 "sha256", 00:03:42.195 "sha384", 00:03:42.195 "sha512" 00:03:42.195 ], 00:03:42.195 "dhchap_dhgroups": [ 00:03:42.195 "null", 00:03:42.195 "ffdhe2048", 00:03:42.195 "ffdhe3072", 00:03:42.195 "ffdhe4096", 00:03:42.195 "ffdhe6144", 00:03:42.195 "ffdhe8192" 00:03:42.195 ] 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "bdev_nvme_set_hotplug", 00:03:42.195 "params": { 00:03:42.195 "period_us": 100000, 00:03:42.195 "enable": false 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "bdev_wait_for_examine" 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "scsi", 00:03:42.195 "config": null 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "scheduler", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "framework_set_scheduler", 00:03:42.195 "params": { 00:03:42.195 "name": "static" 00:03:42.195 } 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "vhost_scsi", 00:03:42.195 "config": [] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "vhost_blk", 00:03:42.195 "config": [] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "ublk", 00:03:42.195 "config": [] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "nbd", 00:03:42.195 "config": [] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "nvmf", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "nvmf_set_config", 00:03:42.195 "params": { 00:03:42.195 "discovery_filter": "match_any", 00:03:42.195 "admin_cmd_passthru": { 00:03:42.195 "identify_ctrlr": false 00:03:42.195 }, 00:03:42.195 "dhchap_digests": [ 00:03:42.195 "sha256", 00:03:42.195 "sha384", 00:03:42.195 "sha512" 00:03:42.195 ], 00:03:42.195 "dhchap_dhgroups": [ 00:03:42.195 "null", 00:03:42.195 "ffdhe2048", 00:03:42.195 "ffdhe3072", 00:03:42.195 "ffdhe4096", 00:03:42.195 "ffdhe6144", 00:03:42.195 "ffdhe8192" 00:03:42.195 ] 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "nvmf_set_max_subsystems", 00:03:42.195 "params": { 00:03:42.195 "max_subsystems": 1024 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "nvmf_set_crdt", 00:03:42.195 "params": { 00:03:42.195 "crdt1": 0, 00:03:42.195 "crdt2": 0, 00:03:42.195 "crdt3": 0 00:03:42.195 } 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "method": "nvmf_create_transport", 00:03:42.195 "params": { 00:03:42.195 "trtype": "TCP", 00:03:42.195 "max_queue_depth": 128, 00:03:42.195 "max_io_qpairs_per_ctrlr": 127, 00:03:42.195 "in_capsule_data_size": 4096, 00:03:42.195 "max_io_size": 131072, 00:03:42.195 "io_unit_size": 131072, 00:03:42.195 "max_aq_depth": 128, 00:03:42.195 "num_shared_buffers": 511, 00:03:42.195 "buf_cache_size": 4294967295, 00:03:42.195 "dif_insert_or_strip": false, 00:03:42.195 "zcopy": false, 00:03:42.195 "c2h_success": true, 00:03:42.195 "sock_priority": 0, 00:03:42.195 "abort_timeout_sec": 1, 00:03:42.195 "ack_timeout": 0, 00:03:42.195 "data_wr_pool_size": 0 00:03:42.195 } 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 }, 00:03:42.195 { 00:03:42.195 "subsystem": "iscsi", 00:03:42.195 "config": [ 00:03:42.195 { 00:03:42.195 "method": "iscsi_set_options", 00:03:42.195 "params": { 00:03:42.195 "node_base": "iqn.2016-06.io.spdk", 00:03:42.195 "max_sessions": 
128, 00:03:42.195 "max_connections_per_session": 2, 00:03:42.195 "max_queue_depth": 64, 00:03:42.195 "default_time2wait": 2, 00:03:42.195 "default_time2retain": 20, 00:03:42.195 "first_burst_length": 8192, 00:03:42.195 "immediate_data": true, 00:03:42.195 "allow_duplicated_isid": false, 00:03:42.195 "error_recovery_level": 0, 00:03:42.195 "nop_timeout": 60, 00:03:42.195 "nop_in_interval": 30, 00:03:42.195 "disable_chap": false, 00:03:42.195 "require_chap": false, 00:03:42.195 "mutual_chap": false, 00:03:42.195 "chap_group": 0, 00:03:42.195 "max_large_datain_per_connection": 64, 00:03:42.195 "max_r2t_per_connection": 4, 00:03:42.195 "pdu_pool_size": 36864, 00:03:42.195 "immediate_data_pool_size": 16384, 00:03:42.195 "data_out_pool_size": 2048 00:03:42.195 } 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 } 00:03:42.195 ] 00:03:42.195 } 00:03:42.195 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:42.195 14:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1235217 00:03:42.195 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1235217 ']' 00:03:42.195 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1235217 00:03:42.195 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1235217 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1235217' 00:03:42.196 killing process with pid 1235217 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1235217 00:03:42.196 14:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1235217 00:03:42.454 14:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1235238 00:03:42.454 14:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.454 14:56:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1235238 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1235238 ']' 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1235238 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1235238 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1235238' 00:03:47.728 killing process with pid 1235238 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1235238 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1235238 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.728 00:03:47.728 real 0m6.273s 00:03:47.728 user 0m5.976s 00:03:47.728 sys 0m0.596s 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.728 14:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.728 ************************************ 00:03:47.728 END TEST skip_rpc_with_json 00:03:47.728 ************************************ 00:03:47.728 14:56:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:47.728 14:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.728 14:56:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.728 14:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.988 ************************************ 00:03:47.988 START TEST skip_rpc_with_delay 00:03:47.988 ************************************ 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.988 
[2024-12-09 14:56:49.599813] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:47.988 00:03:47.988 real 0m0.071s 00:03:47.988 user 0m0.047s 00:03:47.988 sys 0m0.023s 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.988 14:56:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.988 ************************************ 00:03:47.988 END TEST skip_rpc_with_delay 00:03:47.988 ************************************ 00:03:47.988 14:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.988 14:56:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:47.988 14:56:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:47.988 14:56:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.988 14:56:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.988 14:56:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.988 ************************************ 00:03:47.988 START TEST exit_on_failed_rpc_init 00:03:47.988 ************************************ 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1236235 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1236235 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1236235 ']' 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.988 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.989 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.989 14:56:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.989 [2024-12-09 14:56:49.740401] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:03:47.989 [2024-12-09 14:56:49.740441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236235 ] 00:03:48.248 [2024-12-09 14:56:49.813392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.248 [2024-12-09 14:56:49.854164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.507 [2024-12-09 14:56:50.129039] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:03:48.507 [2024-12-09 14:56:50.129084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236420 ] 00:03:48.507 [2024-12-09 14:56:50.203992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.507 [2024-12-09 14:56:50.243317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:48.507 [2024-12-09 14:56:50.243371] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:48.507 [2024-12-09 14:56:50.243380] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:48.507 [2024-12-09 14:56:50.243386] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:48.507 14:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:48.508 14:56:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1236235 00:03:48.508 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1236235 ']' 00:03:48.508 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1236235 00:03:48.508 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:48.508 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.508 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1236235 00:03:48.767 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.767 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.767 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1236235' 00:03:48.767 killing process with pid 1236235 00:03:48.767 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1236235 00:03:48.767 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1236235 00:03:49.027 00:03:49.027 real 0m0.953s 00:03:49.027 user 0m0.995s 00:03:49.027 sys 0m0.406s 00:03:49.027 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.027 14:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.027 ************************************ 00:03:49.027 END TEST exit_on_failed_rpc_init 00:03:49.027 ************************************ 00:03:49.027 14:56:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.027 00:03:49.027 real 0m13.128s 00:03:49.027 user 0m12.340s 00:03:49.027 sys 0m1.604s 00:03:49.027 14:56:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.027 14:56:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.027 ************************************ 00:03:49.027 END TEST skip_rpc 00:03:49.027 ************************************ 00:03:49.027 14:56:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.027 14:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.027 14:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.027 14:56:50 -- 
common/autotest_common.sh@10 -- # set +x 00:03:49.027 ************************************ 00:03:49.027 START TEST rpc_client 00:03:49.027 ************************************ 00:03:49.027 14:56:50 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.287 * Looking for test storage... 00:03:49.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:49.287 14:56:50 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.287 14:56:50 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.287 14:56:50 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.287 14:56:50 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:49.287 14:56:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.288 14:56:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.288 --rc genhtml_branch_coverage=1 00:03:49.288 --rc genhtml_function_coverage=1 00:03:49.288 --rc genhtml_legend=1 00:03:49.288 --rc geninfo_all_blocks=1 00:03:49.288 --rc geninfo_unexecuted_blocks=1 00:03:49.288 00:03:49.288 ' 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.288 --rc genhtml_branch_coverage=1 00:03:49.288 --rc genhtml_function_coverage=1 00:03:49.288 --rc genhtml_legend=1 00:03:49.288 --rc geninfo_all_blocks=1 00:03:49.288 --rc geninfo_unexecuted_blocks=1 00:03:49.288 00:03:49.288 ' 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.288 --rc genhtml_branch_coverage=1 00:03:49.288 --rc genhtml_function_coverage=1 00:03:49.288 --rc genhtml_legend=1 00:03:49.288 --rc geninfo_all_blocks=1 00:03:49.288 --rc geninfo_unexecuted_blocks=1 00:03:49.288 00:03:49.288 ' 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.288 --rc genhtml_branch_coverage=1 00:03:49.288 --rc genhtml_function_coverage=1 00:03:49.288 --rc genhtml_legend=1 00:03:49.288 --rc geninfo_all_blocks=1 00:03:49.288 --rc geninfo_unexecuted_blocks=1 00:03:49.288 00:03:49.288 ' 00:03:49.288 14:56:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:49.288 OK 00:03:49.288 14:56:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:49.288 00:03:49.288 real 0m0.200s 00:03:49.288 user 0m0.122s 00:03:49.288 sys 0m0.090s 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.288 14:56:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:49.288 ************************************ 00:03:49.288 END TEST rpc_client 00:03:49.288 ************************************ 00:03:49.288 14:56:50 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
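The json_config run that follows drives a paused spdk_tgt entirely over its Unix-domain RPC socket. A minimal hand-run sketch of that startup flow, using only the binaries, sockets, and flags visible in this trace (paths reflect this Jenkins workspace and would differ locally; the pipe between gen_nvme.sh and load_config is inferred from the consecutive @280/@281 steps), looks like:

  # Sketch only -- condensed from the json_config.sh trace below, not an extra test step.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target paused on a private RPC socket, as json_config.sh does.
  $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # Feed a generated configuration to the paused target over that socket (pipe inferred).
  $SPDK/scripts/gen_nvme.sh --json-with-subsystems | \
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  # Later, snapshot the live configuration back out as JSON.
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
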
00:03:49.288 14:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.288 14:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.288 14:56:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.288 ************************************ 00:03:49.288 START TEST json_config 00:03:49.288 ************************************ 00:03:49.288 14:56:51 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:49.288 14:56:51 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.288 14:56:51 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.548 14:56:51 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.548 14:56:51 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.548 14:56:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.548 14:56:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.548 14:56:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.548 14:56:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.549 14:56:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.549 14:56:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.549 14:56:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.549 14:56:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.549 14:56:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.549 14:56:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:49.549 14:56:51 json_config -- scripts/common.sh@345 -- # : 1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.549 14:56:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.549 14:56:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@353 -- # local d=1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.549 14:56:51 json_config -- scripts/common.sh@355 -- # echo 1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.549 14:56:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:49.549 14:56:51 json_config -- scripts/common.sh@353 -- # local d=2 00:03:49.549 14:56:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.549 14:56:51 json_config -- scripts/common.sh@355 -- # echo 2 00:03:49.549 14:56:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.549 14:56:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.549 14:56:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.549 14:56:51 json_config -- scripts/common.sh@368 -- # return 0 00:03:49.549 14:56:51 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.549 14:56:51 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.549 --rc genhtml_branch_coverage=1 00:03:49.549 --rc genhtml_function_coverage=1 00:03:49.549 --rc genhtml_legend=1 00:03:49.549 --rc geninfo_all_blocks=1 00:03:49.549 --rc geninfo_unexecuted_blocks=1 00:03:49.549 00:03:49.549 ' 00:03:49.549 14:56:51 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.549 --rc genhtml_branch_coverage=1 00:03:49.549 --rc genhtml_function_coverage=1 00:03:49.549 --rc genhtml_legend=1 00:03:49.549 --rc geninfo_all_blocks=1 00:03:49.549 --rc geninfo_unexecuted_blocks=1 00:03:49.549 00:03:49.549 ' 00:03:49.549 14:56:51 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.549 --rc genhtml_branch_coverage=1 00:03:49.549 --rc genhtml_function_coverage=1 00:03:49.549 --rc genhtml_legend=1 00:03:49.549 --rc geninfo_all_blocks=1 00:03:49.549 --rc geninfo_unexecuted_blocks=1 00:03:49.549 00:03:49.549 ' 00:03:49.549 14:56:51 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.549 --rc genhtml_branch_coverage=1 00:03:49.549 --rc genhtml_function_coverage=1 00:03:49.549 --rc genhtml_legend=1 00:03:49.549 --rc geninfo_all_blocks=1 00:03:49.549 --rc geninfo_unexecuted_blocks=1 00:03:49.549 00:03:49.549 ' 00:03:49.549 14:56:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:49.549 14:56:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:49.549 14:56:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.549 14:56:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.549 14:56:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.549 14:56:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.549 14:56:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.549 14:56:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.549 14:56:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.549 14:56:51 json_config -- paths/export.sh@5 -- # export PATH 00:03:49.549 14:56:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.549 14:56:51 json_config -- nvmf/common.sh@51 -- # : 0 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:49.550 14:56:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.550 14:56:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:49.550 INFO: JSON configuration test init 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.550 14:56:51 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:49.550 14:56:51 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:49.550 14:56:51 json_config -- json_config/common.sh@10 -- # shift 00:03:49.550 14:56:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:49.550 14:56:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:49.550 14:56:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:49.550 14:56:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.550 14:56:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.550 14:56:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1236689 00:03:49.550 14:56:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:49.550 Waiting for target to run... 00:03:49.550 14:56:51 json_config -- json_config/common.sh@25 -- # waitforlisten 1236689 /var/tmp/spdk_tgt.sock 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 1236689 ']' 00:03:49.550 14:56:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:49.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.550 14:56:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.550 [2024-12-09 14:56:51.267317] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:03:49.550 [2024-12-09 14:56:51.267365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236689 ] 00:03:50.119 [2024-12-09 14:56:51.721957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.119 [2024-12-09 14:56:51.778098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.378 14:56:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.378 14:56:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:50.378 14:56:52 json_config -- json_config/common.sh@26 -- # echo '' 00:03:50.378 00:03:50.378 14:56:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:50.378 14:56:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:50.378 14:56:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.378 14:56:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.378 14:56:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:50.378 14:56:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:50.378 14:56:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.378 14:56:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.378 14:56:52 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:50.378 14:56:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:50.378 14:56:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:53.667 14:56:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.667 14:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:53.667 14:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:53.667 14:56:55 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@54 -- # sort 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:53.667 14:56:55 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:53.667 14:56:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.667 14:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:53.926 14:56:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.926 14:56:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.926 14:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.926 MallocForNvmf0 00:03:53.926 14:56:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.926 14:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:54.184 MallocForNvmf1 00:03:54.185 14:56:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.185 14:56:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.443 [2024-12-09 14:56:56.041385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.443 14:56:56 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.443 14:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.702 14:56:56 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.702 14:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.702 14:56:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.702 14:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.961 14:56:56 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:54.961 14:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.219 [2024-12-09 14:56:56.839788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.219 14:56:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:55.219 14:56:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.219 14:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.219 14:56:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:55.219 14:56:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.219 14:56:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.219 14:56:56 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:55.219 14:56:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.219 14:56:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.478 MallocBdevForConfigChangeCheck 00:03:55.478 14:56:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:55.478 14:56:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.478 14:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.478 14:56:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:55.478 14:56:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.737 14:56:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:55.737 INFO: shutting down applications... 
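The trace above is the core of json_config_setup_target: it builds the NVMe-oF/TCP configuration that the test later saves, reloads, and diffs. Condensed into a plain shell sketch (the RPC and SOCK variables and the final output redirection are mine; paths are the ones used in this workspace, and spdk_tgt is assumed to be up and listening on the RPC socket), the sequence is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
# Two malloc bdevs to serve as namespaces (sizes and block sizes as in the trace)
"$RPC" -s "$SOCK" bdev_malloc_create 8 512 --name MallocForNvmf0
"$RPC" -s "$SOCK" bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420
"$RPC" -s "$SOCK" nvmf_create_transport -t tcp -u 8192 -c 0
"$RPC" -s "$SOCK" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" -s "$SOCK" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
"$RPC" -s "$SOCK" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
"$RPC" -s "$SOCK" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# Snapshot the running configuration; the relaunch below loads this file with --json
"$RPC" -s "$SOCK" save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json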
00:03:55.737 14:56:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:55.737 14:56:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:55.737 14:56:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:55.737 14:56:57 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:57.641 Calling clear_iscsi_subsystem 00:03:57.641 Calling clear_nvmf_subsystem 00:03:57.641 Calling clear_nbd_subsystem 00:03:57.641 Calling clear_ublk_subsystem 00:03:57.641 Calling clear_vhost_blk_subsystem 00:03:57.641 Calling clear_vhost_scsi_subsystem 00:03:57.641 Calling clear_bdev_subsystem 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@352 -- # break 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:57.641 14:56:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:57.641 14:56:59 json_config -- json_config/common.sh@31 -- # local app=target 00:03:57.641 14:56:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:57.641 14:56:59 json_config -- json_config/common.sh@35 -- # [[ -n 1236689 ]] 00:03:57.641 14:56:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1236689 00:03:57.641 14:56:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:57.641 14:56:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.641 14:56:59 json_config -- json_config/common.sh@41 -- # kill -0 1236689 00:03:57.641 14:56:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.210 14:56:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.210 14:56:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.210 14:56:59 json_config -- json_config/common.sh@41 -- # kill -0 1236689 00:03:58.210 14:56:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.210 14:56:59 json_config -- json_config/common.sh@43 -- # break 00:03:58.210 14:56:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.210 14:56:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.210 SPDK target shutdown done 00:03:58.210 14:56:59 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:58.210 INFO: relaunching applications... 
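The shutdown just traced (json_config/common.sh@38-45, repeated later for the other apps) is a SIGINT followed by a bounded poll. A minimal sketch of that pattern, assuming app_pid holds the spdk_tgt PID from this run:

app_pid=1236689                      # PID started earlier in this trace; substitute your own
kill -SIGINT "$app_pid"
# Poll up to 30 times (about 15 s); kill -0 succeeds only while the process is still alive
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done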
00:03:58.210 14:56:59 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.210 14:56:59 json_config -- json_config/common.sh@9 -- # local app=target 00:03:58.210 14:56:59 json_config -- json_config/common.sh@10 -- # shift 00:03:58.210 14:56:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.210 14:56:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.210 14:56:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.210 14:56:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.210 14:56:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.210 14:56:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1238273 00:03:58.210 14:56:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.210 Waiting for target to run... 00:03:58.210 14:56:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.210 14:56:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1238273 /var/tmp/spdk_tgt.sock 00:03:58.210 14:56:59 json_config -- common/autotest_common.sh@835 -- # '[' -z 1238273 ']' 00:03:58.210 14:56:59 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.210 14:56:59 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.210 14:56:59 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.210 14:56:59 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.210 14:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.211 [2024-12-09 14:56:59.961710] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:03:58.211 [2024-12-09 14:56:59.961762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238273 ] 00:03:58.779 [2024-12-09 14:57:00.422299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.779 [2024-12-09 14:57:00.479114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.069 [2024-12-09 14:57:03.511334] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.069 [2024-12-09 14:57:03.543601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:02.637 14:57:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.637 14:57:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:02.637 14:57:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:02.637 00:04:02.637 14:57:04 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:02.637 14:57:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:02.637 INFO: Checking if target configuration is the same... 
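The "same" and "changed" checks that follow compare the target's live configuration against the JSON it was started from, after normalizing both sides. In outline (the json_diff.sh file-descriptor plumbing is simplified here, and config_filter.py is assumed to read stdin, which is how the pipelines in this trace appear to use it):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
CFG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
# Sort both configs so ordering differences do not produce spurious diffs
"$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > /tmp/live.json
"$FILTER" -method sort < "$CFG" > /tmp/file.json
if diff -u /tmp/live.json /tmp/file.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi

Deleting MallocBdevForConfigChangeCheck between the two runs is what makes the second comparison report a change.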
00:04:02.637 14:57:04 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.637 14:57:04 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:02.637 14:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.637 + '[' 2 -ne 2 ']' 00:04:02.637 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:02.637 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:02.637 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.637 +++ basename /dev/fd/62 00:04:02.637 ++ mktemp /tmp/62.XXX 00:04:02.637 + tmp_file_1=/tmp/62.Cto 00:04:02.637 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.637 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:02.637 + tmp_file_2=/tmp/spdk_tgt_config.json.TzD 00:04:02.637 + ret=0 00:04:02.637 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.896 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.896 + diff -u /tmp/62.Cto /tmp/spdk_tgt_config.json.TzD 00:04:02.896 + echo 'INFO: JSON config files are the same' 00:04:02.896 INFO: JSON config files are the same 00:04:02.896 + rm /tmp/62.Cto /tmp/spdk_tgt_config.json.TzD 00:04:02.896 + exit 0 00:04:02.896 14:57:04 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:02.896 14:57:04 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:02.896 INFO: changing configuration and checking if this can be detected... 00:04:02.896 14:57:04 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:02.896 14:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.155 14:57:04 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.155 14:57:04 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:03.155 14:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.155 + '[' 2 -ne 2 ']' 00:04:03.155 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.155 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:03.155 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.155 +++ basename /dev/fd/62 00:04:03.155 ++ mktemp /tmp/62.XXX 00:04:03.155 + tmp_file_1=/tmp/62.Jiu 00:04:03.155 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.155 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.155 + tmp_file_2=/tmp/spdk_tgt_config.json.YeO 00:04:03.155 + ret=0 00:04:03.155 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.414 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.414 + diff -u /tmp/62.Jiu /tmp/spdk_tgt_config.json.YeO 00:04:03.414 + ret=1 00:04:03.414 + echo '=== Start of file: /tmp/62.Jiu ===' 00:04:03.414 + cat /tmp/62.Jiu 00:04:03.414 + echo '=== End of file: /tmp/62.Jiu ===' 00:04:03.414 + echo '' 00:04:03.414 + echo '=== Start of file: /tmp/spdk_tgt_config.json.YeO ===' 00:04:03.414 + cat /tmp/spdk_tgt_config.json.YeO 00:04:03.414 + echo '=== End of file: /tmp/spdk_tgt_config.json.YeO ===' 00:04:03.414 + echo '' 00:04:03.414 + rm /tmp/62.Jiu /tmp/spdk_tgt_config.json.YeO 00:04:03.414 + exit 1 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:03.414 INFO: configuration change detected. 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:03.414 14:57:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.414 14:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@324 -- # [[ -n 1238273 ]] 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:03.414 14:57:05 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:03.414 14:57:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.414 14:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.674 14:57:05 json_config -- json_config/json_config.sh@330 -- # killprocess 1238273 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@954 -- # '[' -z 1238273 ']' 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@958 -- # kill -0 1238273 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@959 -- # uname 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.674 14:57:05 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1238273 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1238273' 00:04:03.674 killing process with pid 1238273 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@973 -- # kill 1238273 00:04:03.674 14:57:05 json_config -- common/autotest_common.sh@978 -- # wait 1238273 00:04:05.053 14:57:06 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.053 14:57:06 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:05.053 14:57:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.053 14:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.053 14:57:06 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:05.053 14:57:06 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:05.053 INFO: Success 00:04:05.053 00:04:05.053 real 0m15.833s 00:04:05.053 user 0m16.335s 00:04:05.053 sys 0m2.682s 00:04:05.312 14:57:06 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.312 14:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 END TEST json_config 00:04:05.312 ************************************ 00:04:05.312 14:57:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.312 14:57:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.312 14:57:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.312 14:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 ************************************ 00:04:05.312 START TEST json_config_extra_key 00:04:05.312 ************************************ 00:04:05.312 14:57:06 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.312 14:57:06 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.312 14:57:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.312 14:57:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.312 14:57:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.312 14:57:07 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:05.312 14:57:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:05.313 14:57:07 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.313 14:57:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.313 --rc genhtml_branch_coverage=1 00:04:05.313 --rc genhtml_function_coverage=1 00:04:05.313 --rc genhtml_legend=1 00:04:05.313 --rc geninfo_all_blocks=1 00:04:05.313 --rc geninfo_unexecuted_blocks=1 00:04:05.313 00:04:05.313 ' 00:04:05.313 14:57:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.313 --rc genhtml_branch_coverage=1 00:04:05.313 --rc genhtml_function_coverage=1 00:04:05.313 --rc genhtml_legend=1 00:04:05.313 --rc geninfo_all_blocks=1 00:04:05.313 --rc geninfo_unexecuted_blocks=1 00:04:05.313 00:04:05.313 ' 00:04:05.313 14:57:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.313 --rc genhtml_branch_coverage=1 00:04:05.313 --rc genhtml_function_coverage=1 00:04:05.313 --rc genhtml_legend=1 00:04:05.313 --rc geninfo_all_blocks=1 00:04:05.313 --rc geninfo_unexecuted_blocks=1 00:04:05.313 00:04:05.313 ' 00:04:05.313 14:57:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.313 --rc genhtml_branch_coverage=1 00:04:05.313 --rc genhtml_function_coverage=1 00:04:05.313 --rc genhtml_legend=1 00:04:05.313 --rc geninfo_all_blocks=1 00:04:05.313 --rc geninfo_unexecuted_blocks=1 00:04:05.313 00:04:05.313 ' 00:04:05.313 14:57:07 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.313 14:57:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.313 14:57:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.313 14:57:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.313 14:57:07 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.313 14:57:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:05.313 14:57:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.313 14:57:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:05.313 INFO: launching applications... 
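Unlike the json_config test above, json_config_extra_key starts the target directly from a pre-built configuration file and, as traced here, only verifies that it comes up and shuts down cleanly. The launch that follows is essentially the sketch below (workspace paths as in this run; the waitforlisten polling detail is not shown in this trace):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start spdk_tgt on one core, loading the canned extra_key.json configuration
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/test/json_config/extra_key.json" &
app_pid=$!
# The harness then waits for RPCs on /var/tmp/spdk_tgt.sock before sending
# SIGINT and polling for exit, as in the earlier shutdown sketch.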
00:04:05.313 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.313 14:57:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.314 14:57:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1239957 00:04:05.314 14:57:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:05.314 Waiting for target to run... 00:04:05.314 14:57:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1239957 /var/tmp/spdk_tgt.sock 00:04:05.314 14:57:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1239957 ']' 00:04:05.314 14:57:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:05.314 14:57:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:05.314 14:57:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.314 14:57:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:05.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:05.314 14:57:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.314 14:57:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:05.573 [2024-12-09 14:57:07.150987] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:05.573 [2024-12-09 14:57:07.151037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239957 ] 00:04:05.831 [2024-12-09 14:57:07.610836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.089 [2024-12-09 14:57:07.657709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.349 14:57:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.349 14:57:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:06.349 00:04:06.349 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:06.349 INFO: shutting down applications... 
00:04:06.349 14:57:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1239957 ]] 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1239957 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1239957 00:04:06.349 14:57:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1239957 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.917 14:57:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.917 SPDK target shutdown done 00:04:06.917 14:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:06.917 Success 00:04:06.917 00:04:06.917 real 0m1.570s 00:04:06.917 user 0m1.179s 00:04:06.917 sys 0m0.566s 00:04:06.918 14:57:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.918 14:57:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.918 ************************************ 00:04:06.918 END TEST json_config_extra_key 00:04:06.918 ************************************ 00:04:06.918 14:57:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.918 14:57:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.918 14:57:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.918 14:57:08 -- common/autotest_common.sh@10 -- # set +x 00:04:06.918 ************************************ 00:04:06.918 START TEST alias_rpc 00:04:06.918 ************************************ 00:04:06.918 14:57:08 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.918 * Looking for test storage... 
00:04:06.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:06.918 14:57:08 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:06.918 14:57:08 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:06.918 14:57:08 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.177 14:57:08 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.177 14:57:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.178 14:57:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.178 14:57:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.178 14:57:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.178 14:57:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.178 14:57:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.178 14:57:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.178 --rc genhtml_branch_coverage=1 00:04:07.178 --rc genhtml_function_coverage=1 00:04:07.178 --rc genhtml_legend=1 00:04:07.178 --rc geninfo_all_blocks=1 00:04:07.178 --rc geninfo_unexecuted_blocks=1 00:04:07.178 00:04:07.178 ' 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.178 --rc genhtml_branch_coverage=1 00:04:07.178 --rc genhtml_function_coverage=1 00:04:07.178 --rc genhtml_legend=1 00:04:07.178 --rc geninfo_all_blocks=1 00:04:07.178 --rc geninfo_unexecuted_blocks=1 00:04:07.178 00:04:07.178 ' 00:04:07.178 14:57:08 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.178 --rc genhtml_branch_coverage=1 00:04:07.178 --rc genhtml_function_coverage=1 00:04:07.178 --rc genhtml_legend=1 00:04:07.178 --rc geninfo_all_blocks=1 00:04:07.178 --rc geninfo_unexecuted_blocks=1 00:04:07.178 00:04:07.178 ' 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.178 --rc genhtml_branch_coverage=1 00:04:07.178 --rc genhtml_function_coverage=1 00:04:07.178 --rc genhtml_legend=1 00:04:07.178 --rc geninfo_all_blocks=1 00:04:07.178 --rc geninfo_unexecuted_blocks=1 00:04:07.178 00:04:07.178 ' 00:04:07.178 14:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.178 14:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1240374 00:04:07.178 14:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.178 14:57:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1240374 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1240374 ']' 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.178 14:57:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.178 [2024-12-09 14:57:08.784084] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:07.178 [2024-12-09 14:57:08.784131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240374 ] 00:04:07.178 [2024-12-09 14:57:08.855638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.178 [2024-12-09 14:57:08.894744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.437 14:57:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.437 14:57:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:07.437 14:57:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:07.697 14:57:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1240374 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1240374 ']' 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1240374 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1240374 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1240374' 00:04:07.697 killing process with pid 1240374 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@973 -- # kill 1240374 00:04:07.697 14:57:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 1240374 00:04:07.956 00:04:07.956 real 0m1.132s 00:04:07.956 user 0m1.149s 00:04:07.956 sys 0m0.422s 00:04:07.956 14:57:09 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.956 14:57:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.956 ************************************ 00:04:07.956 END TEST alias_rpc 00:04:07.956 ************************************ 00:04:07.956 14:57:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:07.956 14:57:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:07.956 14:57:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.956 14:57:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.956 14:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.216 ************************************ 00:04:08.216 START TEST spdkcli_tcp 00:04:08.216 ************************************ 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.216 * Looking for test storage... 
00:04:08.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.216 14:57:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.216 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.216 --rc genhtml_branch_coverage=1 00:04:08.216 --rc genhtml_function_coverage=1 00:04:08.216 --rc genhtml_legend=1 00:04:08.216 --rc geninfo_all_blocks=1 00:04:08.216 --rc geninfo_unexecuted_blocks=1 00:04:08.216 00:04:08.216 ' 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.217 --rc genhtml_branch_coverage=1 00:04:08.217 --rc genhtml_function_coverage=1 00:04:08.217 --rc genhtml_legend=1 00:04:08.217 --rc geninfo_all_blocks=1 00:04:08.217 --rc 
geninfo_unexecuted_blocks=1 00:04:08.217 00:04:08.217 ' 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.217 --rc genhtml_branch_coverage=1 00:04:08.217 --rc genhtml_function_coverage=1 00:04:08.217 --rc genhtml_legend=1 00:04:08.217 --rc geninfo_all_blocks=1 00:04:08.217 --rc geninfo_unexecuted_blocks=1 00:04:08.217 00:04:08.217 ' 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.217 --rc genhtml_branch_coverage=1 00:04:08.217 --rc genhtml_function_coverage=1 00:04:08.217 --rc genhtml_legend=1 00:04:08.217 --rc geninfo_all_blocks=1 00:04:08.217 --rc geninfo_unexecuted_blocks=1 00:04:08.217 00:04:08.217 ' 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1240620 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1240620 00:04:08.217 14:57:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1240620 ']' 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.217 14:57:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.217 [2024-12-09 14:57:09.994034] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:08.217 [2024-12-09 14:57:09.994081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240620 ] 00:04:08.477 [2024-12-09 14:57:10.069536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:08.477 [2024-12-09 14:57:10.112706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.477 [2024-12-09 14:57:10.112708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.736 14:57:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.737 14:57:10 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:08.737 14:57:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1240829 00:04:08.737 14:57:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:08.737 14:57:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:08.737 [ 00:04:08.737 "bdev_malloc_delete", 00:04:08.737 "bdev_malloc_create", 00:04:08.737 "bdev_null_resize", 00:04:08.737 "bdev_null_delete", 00:04:08.737 "bdev_null_create", 00:04:08.737 "bdev_nvme_cuse_unregister", 00:04:08.737 "bdev_nvme_cuse_register", 00:04:08.737 "bdev_opal_new_user", 00:04:08.737 "bdev_opal_set_lock_state", 00:04:08.737 "bdev_opal_delete", 00:04:08.737 "bdev_opal_get_info", 00:04:08.737 "bdev_opal_create", 00:04:08.737 "bdev_nvme_opal_revert", 00:04:08.737 "bdev_nvme_opal_init", 00:04:08.737 "bdev_nvme_send_cmd", 00:04:08.737 "bdev_nvme_set_keys", 00:04:08.737 "bdev_nvme_get_path_iostat", 00:04:08.737 "bdev_nvme_get_mdns_discovery_info", 00:04:08.737 "bdev_nvme_stop_mdns_discovery", 00:04:08.737 "bdev_nvme_start_mdns_discovery", 00:04:08.737 "bdev_nvme_set_multipath_policy", 00:04:08.737 "bdev_nvme_set_preferred_path", 00:04:08.737 "bdev_nvme_get_io_paths", 00:04:08.737 "bdev_nvme_remove_error_injection", 00:04:08.737 "bdev_nvme_add_error_injection", 00:04:08.737 "bdev_nvme_get_discovery_info", 00:04:08.737 "bdev_nvme_stop_discovery", 00:04:08.737 "bdev_nvme_start_discovery", 00:04:08.737 "bdev_nvme_get_controller_health_info", 00:04:08.737 "bdev_nvme_disable_controller", 00:04:08.737 "bdev_nvme_enable_controller", 00:04:08.737 "bdev_nvme_reset_controller", 00:04:08.737 "bdev_nvme_get_transport_statistics", 00:04:08.737 "bdev_nvme_apply_firmware", 00:04:08.737 "bdev_nvme_detach_controller", 00:04:08.737 "bdev_nvme_get_controllers", 00:04:08.737 "bdev_nvme_attach_controller", 00:04:08.737 "bdev_nvme_set_hotplug", 00:04:08.737 "bdev_nvme_set_options", 00:04:08.737 "bdev_passthru_delete", 00:04:08.737 "bdev_passthru_create", 00:04:08.737 "bdev_lvol_set_parent_bdev", 00:04:08.737 "bdev_lvol_set_parent", 00:04:08.737 "bdev_lvol_check_shallow_copy", 00:04:08.737 "bdev_lvol_start_shallow_copy", 00:04:08.737 "bdev_lvol_grow_lvstore", 00:04:08.737 "bdev_lvol_get_lvols", 00:04:08.737 "bdev_lvol_get_lvstores", 00:04:08.737 "bdev_lvol_delete", 00:04:08.737 "bdev_lvol_set_read_only", 00:04:08.737 "bdev_lvol_resize", 00:04:08.737 "bdev_lvol_decouple_parent", 00:04:08.737 "bdev_lvol_inflate", 00:04:08.737 "bdev_lvol_rename", 00:04:08.737 "bdev_lvol_clone_bdev", 00:04:08.737 "bdev_lvol_clone", 00:04:08.737 "bdev_lvol_snapshot", 00:04:08.737 "bdev_lvol_create", 00:04:08.737 "bdev_lvol_delete_lvstore", 00:04:08.737 "bdev_lvol_rename_lvstore", 
00:04:08.737 "bdev_lvol_create_lvstore", 00:04:08.737 "bdev_raid_set_options", 00:04:08.737 "bdev_raid_remove_base_bdev", 00:04:08.737 "bdev_raid_add_base_bdev", 00:04:08.737 "bdev_raid_delete", 00:04:08.737 "bdev_raid_create", 00:04:08.737 "bdev_raid_get_bdevs", 00:04:08.737 "bdev_error_inject_error", 00:04:08.737 "bdev_error_delete", 00:04:08.737 "bdev_error_create", 00:04:08.737 "bdev_split_delete", 00:04:08.737 "bdev_split_create", 00:04:08.737 "bdev_delay_delete", 00:04:08.737 "bdev_delay_create", 00:04:08.737 "bdev_delay_update_latency", 00:04:08.737 "bdev_zone_block_delete", 00:04:08.737 "bdev_zone_block_create", 00:04:08.737 "blobfs_create", 00:04:08.737 "blobfs_detect", 00:04:08.737 "blobfs_set_cache_size", 00:04:08.737 "bdev_aio_delete", 00:04:08.737 "bdev_aio_rescan", 00:04:08.737 "bdev_aio_create", 00:04:08.737 "bdev_ftl_set_property", 00:04:08.737 "bdev_ftl_get_properties", 00:04:08.737 "bdev_ftl_get_stats", 00:04:08.737 "bdev_ftl_unmap", 00:04:08.737 "bdev_ftl_unload", 00:04:08.737 "bdev_ftl_delete", 00:04:08.737 "bdev_ftl_load", 00:04:08.737 "bdev_ftl_create", 00:04:08.737 "bdev_virtio_attach_controller", 00:04:08.737 "bdev_virtio_scsi_get_devices", 00:04:08.737 "bdev_virtio_detach_controller", 00:04:08.737 "bdev_virtio_blk_set_hotplug", 00:04:08.737 "bdev_iscsi_delete", 00:04:08.737 "bdev_iscsi_create", 00:04:08.737 "bdev_iscsi_set_options", 00:04:08.737 "accel_error_inject_error", 00:04:08.737 "ioat_scan_accel_module", 00:04:08.737 "dsa_scan_accel_module", 00:04:08.737 "iaa_scan_accel_module", 00:04:08.737 "vfu_virtio_create_fs_endpoint", 00:04:08.737 "vfu_virtio_create_scsi_endpoint", 00:04:08.737 "vfu_virtio_scsi_remove_target", 00:04:08.737 "vfu_virtio_scsi_add_target", 00:04:08.737 "vfu_virtio_create_blk_endpoint", 00:04:08.737 "vfu_virtio_delete_endpoint", 00:04:08.737 "keyring_file_remove_key", 00:04:08.737 "keyring_file_add_key", 00:04:08.737 "keyring_linux_set_options", 00:04:08.737 "fsdev_aio_delete", 00:04:08.737 "fsdev_aio_create", 00:04:08.737 "iscsi_get_histogram", 00:04:08.737 "iscsi_enable_histogram", 00:04:08.737 "iscsi_set_options", 00:04:08.737 "iscsi_get_auth_groups", 00:04:08.737 "iscsi_auth_group_remove_secret", 00:04:08.737 "iscsi_auth_group_add_secret", 00:04:08.737 "iscsi_delete_auth_group", 00:04:08.737 "iscsi_create_auth_group", 00:04:08.737 "iscsi_set_discovery_auth", 00:04:08.737 "iscsi_get_options", 00:04:08.737 "iscsi_target_node_request_logout", 00:04:08.737 "iscsi_target_node_set_redirect", 00:04:08.737 "iscsi_target_node_set_auth", 00:04:08.737 "iscsi_target_node_add_lun", 00:04:08.737 "iscsi_get_stats", 00:04:08.737 "iscsi_get_connections", 00:04:08.737 "iscsi_portal_group_set_auth", 00:04:08.737 "iscsi_start_portal_group", 00:04:08.737 "iscsi_delete_portal_group", 00:04:08.737 "iscsi_create_portal_group", 00:04:08.737 "iscsi_get_portal_groups", 00:04:08.737 "iscsi_delete_target_node", 00:04:08.737 "iscsi_target_node_remove_pg_ig_maps", 00:04:08.737 "iscsi_target_node_add_pg_ig_maps", 00:04:08.737 "iscsi_create_target_node", 00:04:08.737 "iscsi_get_target_nodes", 00:04:08.737 "iscsi_delete_initiator_group", 00:04:08.737 "iscsi_initiator_group_remove_initiators", 00:04:08.737 "iscsi_initiator_group_add_initiators", 00:04:08.737 "iscsi_create_initiator_group", 00:04:08.737 "iscsi_get_initiator_groups", 00:04:08.737 "nvmf_set_crdt", 00:04:08.737 "nvmf_set_config", 00:04:08.737 "nvmf_set_max_subsystems", 00:04:08.737 "nvmf_stop_mdns_prr", 00:04:08.737 "nvmf_publish_mdns_prr", 00:04:08.737 "nvmf_subsystem_get_listeners", 00:04:08.737 
"nvmf_subsystem_get_qpairs", 00:04:08.737 "nvmf_subsystem_get_controllers", 00:04:08.737 "nvmf_get_stats", 00:04:08.737 "nvmf_get_transports", 00:04:08.737 "nvmf_create_transport", 00:04:08.737 "nvmf_get_targets", 00:04:08.737 "nvmf_delete_target", 00:04:08.737 "nvmf_create_target", 00:04:08.737 "nvmf_subsystem_allow_any_host", 00:04:08.737 "nvmf_subsystem_set_keys", 00:04:08.737 "nvmf_subsystem_remove_host", 00:04:08.737 "nvmf_subsystem_add_host", 00:04:08.737 "nvmf_ns_remove_host", 00:04:08.737 "nvmf_ns_add_host", 00:04:08.737 "nvmf_subsystem_remove_ns", 00:04:08.737 "nvmf_subsystem_set_ns_ana_group", 00:04:08.737 "nvmf_subsystem_add_ns", 00:04:08.737 "nvmf_subsystem_listener_set_ana_state", 00:04:08.737 "nvmf_discovery_get_referrals", 00:04:08.737 "nvmf_discovery_remove_referral", 00:04:08.737 "nvmf_discovery_add_referral", 00:04:08.737 "nvmf_subsystem_remove_listener", 00:04:08.737 "nvmf_subsystem_add_listener", 00:04:08.737 "nvmf_delete_subsystem", 00:04:08.737 "nvmf_create_subsystem", 00:04:08.737 "nvmf_get_subsystems", 00:04:08.737 "env_dpdk_get_mem_stats", 00:04:08.737 "nbd_get_disks", 00:04:08.737 "nbd_stop_disk", 00:04:08.737 "nbd_start_disk", 00:04:08.737 "ublk_recover_disk", 00:04:08.737 "ublk_get_disks", 00:04:08.737 "ublk_stop_disk", 00:04:08.737 "ublk_start_disk", 00:04:08.737 "ublk_destroy_target", 00:04:08.737 "ublk_create_target", 00:04:08.737 "virtio_blk_create_transport", 00:04:08.737 "virtio_blk_get_transports", 00:04:08.737 "vhost_controller_set_coalescing", 00:04:08.737 "vhost_get_controllers", 00:04:08.737 "vhost_delete_controller", 00:04:08.737 "vhost_create_blk_controller", 00:04:08.737 "vhost_scsi_controller_remove_target", 00:04:08.737 "vhost_scsi_controller_add_target", 00:04:08.737 "vhost_start_scsi_controller", 00:04:08.737 "vhost_create_scsi_controller", 00:04:08.737 "thread_set_cpumask", 00:04:08.738 "scheduler_set_options", 00:04:08.738 "framework_get_governor", 00:04:08.738 "framework_get_scheduler", 00:04:08.738 "framework_set_scheduler", 00:04:08.738 "framework_get_reactors", 00:04:08.738 "thread_get_io_channels", 00:04:08.738 "thread_get_pollers", 00:04:08.738 "thread_get_stats", 00:04:08.738 "framework_monitor_context_switch", 00:04:08.738 "spdk_kill_instance", 00:04:08.738 "log_enable_timestamps", 00:04:08.738 "log_get_flags", 00:04:08.738 "log_clear_flag", 00:04:08.738 "log_set_flag", 00:04:08.738 "log_get_level", 00:04:08.738 "log_set_level", 00:04:08.738 "log_get_print_level", 00:04:08.738 "log_set_print_level", 00:04:08.738 "framework_enable_cpumask_locks", 00:04:08.738 "framework_disable_cpumask_locks", 00:04:08.738 "framework_wait_init", 00:04:08.738 "framework_start_init", 00:04:08.738 "scsi_get_devices", 00:04:08.738 "bdev_get_histogram", 00:04:08.738 "bdev_enable_histogram", 00:04:08.738 "bdev_set_qos_limit", 00:04:08.738 "bdev_set_qd_sampling_period", 00:04:08.738 "bdev_get_bdevs", 00:04:08.738 "bdev_reset_iostat", 00:04:08.738 "bdev_get_iostat", 00:04:08.738 "bdev_examine", 00:04:08.738 "bdev_wait_for_examine", 00:04:08.738 "bdev_set_options", 00:04:08.738 "accel_get_stats", 00:04:08.738 "accel_set_options", 00:04:08.738 "accel_set_driver", 00:04:08.738 "accel_crypto_key_destroy", 00:04:08.738 "accel_crypto_keys_get", 00:04:08.738 "accel_crypto_key_create", 00:04:08.738 "accel_assign_opc", 00:04:08.738 "accel_get_module_info", 00:04:08.738 "accel_get_opc_assignments", 00:04:08.738 "vmd_rescan", 00:04:08.738 "vmd_remove_device", 00:04:08.738 "vmd_enable", 00:04:08.738 "sock_get_default_impl", 00:04:08.738 "sock_set_default_impl", 
00:04:08.738 "sock_impl_set_options", 00:04:08.738 "sock_impl_get_options", 00:04:08.738 "iobuf_get_stats", 00:04:08.738 "iobuf_set_options", 00:04:08.738 "keyring_get_keys", 00:04:08.738 "vfu_tgt_set_base_path", 00:04:08.738 "framework_get_pci_devices", 00:04:08.738 "framework_get_config", 00:04:08.738 "framework_get_subsystems", 00:04:08.738 "fsdev_set_opts", 00:04:08.738 "fsdev_get_opts", 00:04:08.738 "trace_get_info", 00:04:08.738 "trace_get_tpoint_group_mask", 00:04:08.738 "trace_disable_tpoint_group", 00:04:08.738 "trace_enable_tpoint_group", 00:04:08.738 "trace_clear_tpoint_mask", 00:04:08.738 "trace_set_tpoint_mask", 00:04:08.738 "notify_get_notifications", 00:04:08.738 "notify_get_types", 00:04:08.738 "spdk_get_version", 00:04:08.738 "rpc_get_methods" 00:04:08.738 ] 00:04:08.998 14:57:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.998 14:57:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:08.998 14:57:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1240620 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1240620 ']' 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1240620 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1240620 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1240620' 00:04:08.998 killing process with pid 1240620 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1240620 00:04:08.998 14:57:10 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1240620 00:04:09.257 00:04:09.257 real 0m1.164s 00:04:09.257 user 0m1.994s 00:04:09.257 sys 0m0.429s 00:04:09.257 14:57:10 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.257 14:57:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.257 ************************************ 00:04:09.257 END TEST spdkcli_tcp 00:04:09.257 ************************************ 00:04:09.257 14:57:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:09.257 14:57:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.257 14:57:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.257 14:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:09.257 ************************************ 00:04:09.257 START TEST dpdk_mem_utility 00:04:09.257 ************************************ 00:04:09.257 14:57:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:09.517 * Looking for test storage... 
00:04:09.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.517 14:57:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.517 --rc genhtml_branch_coverage=1 00:04:09.517 --rc genhtml_function_coverage=1 00:04:09.517 --rc genhtml_legend=1 00:04:09.517 --rc geninfo_all_blocks=1 00:04:09.517 --rc geninfo_unexecuted_blocks=1 00:04:09.517 00:04:09.517 ' 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.517 --rc 
genhtml_branch_coverage=1 00:04:09.517 --rc genhtml_function_coverage=1 00:04:09.517 --rc genhtml_legend=1 00:04:09.517 --rc geninfo_all_blocks=1 00:04:09.517 --rc geninfo_unexecuted_blocks=1 00:04:09.517 00:04:09.517 ' 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.517 --rc genhtml_branch_coverage=1 00:04:09.517 --rc genhtml_function_coverage=1 00:04:09.517 --rc genhtml_legend=1 00:04:09.517 --rc geninfo_all_blocks=1 00:04:09.517 --rc geninfo_unexecuted_blocks=1 00:04:09.517 00:04:09.517 ' 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.517 --rc genhtml_branch_coverage=1 00:04:09.517 --rc genhtml_function_coverage=1 00:04:09.517 --rc genhtml_legend=1 00:04:09.517 --rc geninfo_all_blocks=1 00:04:09.517 --rc geninfo_unexecuted_blocks=1 00:04:09.517 00:04:09.517 ' 00:04:09.517 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:09.517 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1240929 00:04:09.517 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1240929 00:04:09.517 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1240929 ']' 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.517 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.517 [2024-12-09 14:57:11.216768] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
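Note on the dpdk_mem_utility test that starts here: it boils down to two calls, both visible further down in the trace — the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump as heaps, mempools and memzones. A hedged standalone sketch of the same flow against an already-running target (paths assume an SPDK checkout at ./spdk):

    #!/usr/bin/env bash
    # Dump and summarize DPDK memory usage of a running spdk_tgt.
    SPDK=./spdk                                  # assumed checkout location
    # Ask the target to write /tmp/spdk_mem_dump.txt (default dump filename).
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
    # Overall summary: heaps, mempools, memzones.
    "$SPDK/scripts/dpdk_mem_info.py"
    # Per-malloc-element view for heap id 0 (the "-m 0" call seen below in the log).
    "$SPDK/scripts/dpdk_mem_info.py" -m 0
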
00:04:09.518 [2024-12-09 14:57:11.216814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240929 ] 00:04:09.518 [2024-12-09 14:57:11.291403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.777 [2024-12-09 14:57:11.330925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.777 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.777 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:09.777 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:09.777 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:09.777 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.777 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.037 { 00:04:10.037 "filename": "/tmp/spdk_mem_dump.txt" 00:04:10.037 } 00:04:10.037 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.037 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.037 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:10.037 1 heaps totaling size 818.000000 MiB 00:04:10.037 size: 818.000000 MiB heap id: 0 00:04:10.037 end heaps---------- 00:04:10.037 9 mempools totaling size 603.782043 MiB 00:04:10.037 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:10.037 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:10.037 size: 100.555481 MiB name: bdev_io_1240929 00:04:10.037 size: 50.003479 MiB name: msgpool_1240929 00:04:10.037 size: 36.509338 MiB name: fsdev_io_1240929 00:04:10.037 size: 21.763794 MiB name: PDU_Pool 00:04:10.037 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:10.037 size: 4.133484 MiB name: evtpool_1240929 00:04:10.037 size: 0.026123 MiB name: Session_Pool 00:04:10.037 end mempools------- 00:04:10.037 6 memzones totaling size 4.142822 MiB 00:04:10.037 size: 1.000366 MiB name: RG_ring_0_1240929 00:04:10.037 size: 1.000366 MiB name: RG_ring_1_1240929 00:04:10.037 size: 1.000366 MiB name: RG_ring_4_1240929 00:04:10.037 size: 1.000366 MiB name: RG_ring_5_1240929 00:04:10.037 size: 0.125366 MiB name: RG_ring_2_1240929 00:04:10.037 size: 0.015991 MiB name: RG_ring_3_1240929 00:04:10.037 end memzones------- 00:04:10.037 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:10.037 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:10.037 list of free elements. 
size: 10.852478 MiB 00:04:10.037 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:10.037 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:10.037 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:10.037 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:10.037 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:10.037 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:10.037 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:10.037 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:10.037 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:10.037 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:10.037 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:10.037 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:10.037 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:10.037 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:10.037 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:10.037 list of standard malloc elements. size: 199.218628 MiB 00:04:10.037 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:10.037 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:10.037 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:10.037 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:10.037 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:10.037 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:10.037 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:10.037 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:10.037 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:10.037 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:10.037 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:10.037 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:10.037 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:10.037 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:10.037 list of memzone associated elements. size: 607.928894 MiB 00:04:10.037 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:10.037 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:10.037 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:10.037 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:10.037 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:10.037 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1240929_0 00:04:10.037 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:10.037 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1240929_0 00:04:10.037 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:10.037 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1240929_0 00:04:10.037 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:10.037 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:10.037 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:10.037 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:10.037 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:10.037 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1240929_0 00:04:10.037 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:10.037 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1240929 00:04:10.037 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:10.037 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1240929 00:04:10.037 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:10.037 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:10.037 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:10.037 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:10.037 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:10.038 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:10.038 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:10.038 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:10.038 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:10.038 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1240929 00:04:10.038 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:10.038 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1240929 00:04:10.038 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:10.038 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1240929 00:04:10.038 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:10.038 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1240929 00:04:10.038 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:10.038 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1240929 00:04:10.038 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:10.038 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1240929 00:04:10.038 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:10.038 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:10.038 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:10.038 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:10.038 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:10.038 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:10.038 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:10.038 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1240929 00:04:10.038 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:10.038 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1240929 00:04:10.038 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:10.038 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:10.038 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:10.038 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:10.038 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:10.038 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1240929 00:04:10.038 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:10.038 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:10.038 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:10.038 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1240929 00:04:10.038 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:10.038 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1240929 00:04:10.038 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:10.038 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1240929 00:04:10.038 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:10.038 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:10.038 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:10.038 14:57:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1240929 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1240929 ']' 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1240929 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1240929 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1240929' 00:04:10.038 killing process with pid 1240929 00:04:10.038 14:57:11 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1240929 00:04:10.038 14:57:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1240929 00:04:10.298 00:04:10.298 real 0m1.043s 00:04:10.298 user 0m0.964s 00:04:10.298 sys 0m0.425s 00:04:10.298 14:57:12 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.298 14:57:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.298 ************************************ 00:04:10.298 END TEST dpdk_mem_utility 00:04:10.298 ************************************ 00:04:10.298 14:57:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:10.298 14:57:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.298 14:57:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.298 14:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:10.558 ************************************ 00:04:10.558 START TEST event 00:04:10.558 ************************************ 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:10.558 * Looking for test storage... 00:04:10.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.558 14:57:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.558 14:57:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.558 14:57:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.558 14:57:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.558 14:57:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.558 14:57:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.558 14:57:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.558 14:57:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.558 14:57:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.558 14:57:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.558 14:57:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.558 14:57:12 event -- scripts/common.sh@344 -- # case "$op" in 00:04:10.558 14:57:12 event -- scripts/common.sh@345 -- # : 1 00:04:10.558 14:57:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.558 14:57:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.558 14:57:12 event -- scripts/common.sh@365 -- # decimal 1 00:04:10.558 14:57:12 event -- scripts/common.sh@353 -- # local d=1 00:04:10.558 14:57:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.558 14:57:12 event -- scripts/common.sh@355 -- # echo 1 00:04:10.558 14:57:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.558 14:57:12 event -- scripts/common.sh@366 -- # decimal 2 00:04:10.558 14:57:12 event -- scripts/common.sh@353 -- # local d=2 00:04:10.558 14:57:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.558 14:57:12 event -- scripts/common.sh@355 -- # echo 2 00:04:10.558 14:57:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.558 14:57:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.558 14:57:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.558 14:57:12 event -- scripts/common.sh@368 -- # return 0 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.558 --rc genhtml_branch_coverage=1 00:04:10.558 --rc genhtml_function_coverage=1 00:04:10.558 --rc genhtml_legend=1 00:04:10.558 --rc geninfo_all_blocks=1 00:04:10.558 --rc geninfo_unexecuted_blocks=1 00:04:10.558 00:04:10.558 ' 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.558 --rc genhtml_branch_coverage=1 00:04:10.558 --rc genhtml_function_coverage=1 00:04:10.558 --rc genhtml_legend=1 00:04:10.558 --rc geninfo_all_blocks=1 00:04:10.558 --rc geninfo_unexecuted_blocks=1 00:04:10.558 00:04:10.558 ' 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.558 --rc genhtml_branch_coverage=1 00:04:10.558 --rc genhtml_function_coverage=1 00:04:10.558 --rc genhtml_legend=1 00:04:10.558 --rc geninfo_all_blocks=1 00:04:10.558 --rc geninfo_unexecuted_blocks=1 00:04:10.558 00:04:10.558 ' 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.558 --rc genhtml_branch_coverage=1 00:04:10.558 --rc genhtml_function_coverage=1 00:04:10.558 --rc genhtml_legend=1 00:04:10.558 --rc geninfo_all_blocks=1 00:04:10.558 --rc geninfo_unexecuted_blocks=1 00:04:10.558 00:04:10.558 ' 00:04:10.558 14:57:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:10.558 14:57:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:10.558 14:57:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:10.558 14:57:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.558 14:57:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.558 ************************************ 00:04:10.558 START TEST event_perf 00:04:10.558 ************************************ 00:04:10.558 14:57:12 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:10.558 Running I/O for 1 seconds...[2024-12-09 14:57:12.336599] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:10.558 [2024-12-09 14:57:12.336667] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241217 ] 00:04:10.817 [2024-12-09 14:57:12.414275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.817 [2024-12-09 14:57:12.456043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.817 [2024-12-09 14:57:12.456153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.817 [2024-12-09 14:57:12.456260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.817 [2024-12-09 14:57:12.456261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.754 Running I/O for 1 seconds... 00:04:11.754 lcore 0: 207699 00:04:11.754 lcore 1: 207697 00:04:11.754 lcore 2: 207697 00:04:11.754 lcore 3: 207698 00:04:11.754 done. 00:04:11.754 00:04:11.754 real 0m1.179s 00:04:11.754 user 0m4.097s 00:04:11.754 sys 0m0.078s 00:04:11.754 14:57:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.754 14:57:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.754 ************************************ 00:04:11.754 END TEST event_perf 00:04:11.754 ************************************ 00:04:11.754 14:57:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:11.754 14:57:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:11.754 14:57:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.754 14:57:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.014 ************************************ 00:04:12.014 START TEST event_reactor 00:04:12.014 ************************************ 00:04:12.014 14:57:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.014 [2024-12-09 14:57:13.585743] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
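Note on the per-lcore counters above: event_perf was launched with -m 0xF, and the lcore 0 through lcore 3 lines follow directly from that mask — each set bit in the hex core mask enables one reactor/lcore. A small illustrative bash sketch (not an SPDK helper) that decodes such a mask into the enabled core list:

    #!/usr/bin/env bash
    # Decode an SPDK/DPDK-style hex core mask into the list of enabled cores.
    mask=${1:-0xF}
    val=$((mask))
    cores=()
    for ((bit = 0; val >> bit; bit++)); do
        (( (val >> bit) & 1 )) && cores+=("$bit")
    done
    echo "mask $mask -> cores: ${cores[*]}"      # 0xF -> cores: 0 1 2 3
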
00:04:12.014 [2024-12-09 14:57:13.585806] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241466 ] 00:04:12.014 [2024-12-09 14:57:13.663921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.014 [2024-12-09 14:57:13.703909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.952 test_start 00:04:12.952 oneshot 00:04:12.952 tick 100 00:04:12.952 tick 100 00:04:12.952 tick 250 00:04:12.952 tick 100 00:04:12.952 tick 100 00:04:12.952 tick 250 00:04:12.952 tick 100 00:04:12.952 tick 500 00:04:12.952 tick 100 00:04:12.952 tick 100 00:04:12.952 tick 250 00:04:12.952 tick 100 00:04:12.952 tick 100 00:04:12.952 test_end 00:04:12.952 00:04:12.952 real 0m1.175s 00:04:12.952 user 0m1.092s 00:04:12.952 sys 0m0.078s 00:04:12.952 14:57:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.952 14:57:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:12.952 ************************************ 00:04:12.952 END TEST event_reactor 00:04:12.952 ************************************ 00:04:13.211 14:57:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.211 14:57:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:13.211 14:57:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.211 14:57:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.211 ************************************ 00:04:13.211 START TEST event_reactor_perf 00:04:13.211 ************************************ 00:04:13.211 14:57:14 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.211 [2024-12-09 14:57:14.831031] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:13.211 [2024-12-09 14:57:14.831100] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241709 ] 00:04:13.211 [2024-12-09 14:57:14.909646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.211 [2024-12-09 14:57:14.948191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.590 test_start 00:04:14.590 test_end 00:04:14.590 Performance: 519891 events per second 00:04:14.590 00:04:14.590 real 0m1.174s 00:04:14.590 user 0m1.101s 00:04:14.590 sys 0m0.069s 00:04:14.590 14:57:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.590 14:57:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.590 ************************************ 00:04:14.590 END TEST event_reactor_perf 00:04:14.590 ************************************ 00:04:14.590 14:57:16 event -- event/event.sh@49 -- # uname -s 00:04:14.590 14:57:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:14.590 14:57:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.590 14:57:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.590 14:57:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.590 14:57:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.590 ************************************ 00:04:14.590 START TEST event_scheduler 00:04:14.590 ************************************ 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.590 * Looking for test storage... 
00:04:14.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.590 14:57:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.590 --rc genhtml_branch_coverage=1 00:04:14.590 --rc genhtml_function_coverage=1 00:04:14.590 --rc genhtml_legend=1 00:04:14.590 --rc geninfo_all_blocks=1 00:04:14.590 --rc geninfo_unexecuted_blocks=1 00:04:14.590 00:04:14.590 ' 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.590 --rc genhtml_branch_coverage=1 00:04:14.590 --rc genhtml_function_coverage=1 00:04:14.590 --rc genhtml_legend=1 00:04:14.590 --rc geninfo_all_blocks=1 00:04:14.590 --rc geninfo_unexecuted_blocks=1 00:04:14.590 00:04:14.590 ' 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.590 --rc genhtml_branch_coverage=1 00:04:14.590 --rc genhtml_function_coverage=1 00:04:14.590 --rc genhtml_legend=1 00:04:14.590 --rc geninfo_all_blocks=1 00:04:14.590 --rc geninfo_unexecuted_blocks=1 00:04:14.590 00:04:14.590 ' 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.590 --rc genhtml_branch_coverage=1 00:04:14.590 --rc genhtml_function_coverage=1 00:04:14.590 --rc genhtml_legend=1 00:04:14.590 --rc geninfo_all_blocks=1 00:04:14.590 --rc geninfo_unexecuted_blocks=1 00:04:14.590 00:04:14.590 ' 00:04:14.590 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:14.590 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1241994 00:04:14.590 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:14.590 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.590 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1241994 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1241994 ']' 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.590 14:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.590 [2024-12-09 14:57:16.276635] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:14.590 [2024-12-09 14:57:16.276682] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241994 ] 00:04:14.590 [2024-12-09 14:57:16.348795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.850 [2024-12-09 14:57:16.397238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.850 [2024-12-09 14:57:16.397272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.850 [2024-12-09 14:57:16.397379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:14.850 [2024-12-09 14:57:16.397379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:14.850 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 [2024-12-09 14:57:16.453983] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:14.850 [2024-12-09 14:57:16.454002] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:14.850 [2024-12-09 14:57:16.454011] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:14.850 [2024-12-09 14:57:16.454017] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:14.850 [2024-12-09 14:57:16.454022] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 [2024-12-09 14:57:16.527902] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
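Note on the scheduler setup above: the test starts its app with --wait-for-rpc, switches the framework scheduler to dynamic, then calls framework_start_init before creating pinned and idle threads through the scheduler_plugin. The same scheduler RPCs can be driven directly with rpc.py; a hedged sketch, assuming an SPDK checkout at ./spdk and a target still in the pre-init --wait-for-rpc state as it is here:

    #!/usr/bin/env bash
    # Select the dynamic scheduler on a target started with --wait-for-rpc,
    # then finish initialization and read the active scheduler back.
    RPC=./spdk/scripts/rpc.py                    # assumed checkout location
    "$RPC" framework_set_scheduler dynamic
    "$RPC" framework_start_init
    "$RPC" framework_get_scheduler
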
00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 ************************************ 00:04:14.850 START TEST scheduler_create_thread 00:04:14.850 ************************************ 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 2 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 3 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 4 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 5 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 6 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 7 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.850 8 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.850 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.109 9 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.109 10 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.109 14:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.045 14:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.045 14:57:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:16.045 14:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.045 14:57:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.422 14:57:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.422 14:57:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:17.422 14:57:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:17.422 14:57:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.422 14:57:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.359 14:57:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.359 00:04:18.359 real 0m3.382s 00:04:18.359 user 0m0.026s 00:04:18.359 sys 0m0.004s 00:04:18.359 14:57:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.359 14:57:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.359 ************************************ 00:04:18.359 END TEST scheduler_create_thread 00:04:18.359 ************************************ 00:04:18.359 14:57:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:18.359 14:57:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1241994 00:04:18.359 14:57:19 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1241994 ']' 00:04:18.359 14:57:19 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1241994 00:04:18.359 14:57:19 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:18.359 14:57:19 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.359 14:57:19 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241994 00:04:18.359 14:57:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:18.359 14:57:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:18.359 14:57:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241994' 00:04:18.359 killing process with pid 1241994 00:04:18.359 14:57:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1241994 00:04:18.359 14:57:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1241994 00:04:18.648 [2024-12-09 14:57:20.323785] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
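The scheduler_create_thread test traced above drives the scheduler test app purely through rpc.py plugin calls: pinned active/idle threads on masks 0x1-0x8, fractionally active threads, then set_active and delete on the returned ids. A minimal sketch of that call pattern, assuming the scheduler_plugin module is importable by rpc.py and the scheduler app is already listening on its default RPC socket (neither assumption is stated in this log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # same rpc.py used throughout this log
thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50   # 50% active, as done for thread 11 above
$rpc --plugin scheduler_plugin scheduler_thread_delete "$thread_id"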
00:04:18.973 00:04:18.973 real 0m4.475s 00:04:18.973 user 0m7.881s 00:04:18.973 sys 0m0.368s 00:04:18.973 14:57:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.973 14:57:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.973 ************************************ 00:04:18.973 END TEST event_scheduler 00:04:18.973 ************************************ 00:04:18.973 14:57:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.973 14:57:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.973 14:57:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.973 14:57:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.973 14:57:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.973 ************************************ 00:04:18.973 START TEST app_repeat 00:04:18.973 ************************************ 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1242729 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1242729' 00:04:18.973 Process app_repeat pid: 1242729 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.973 spdk_app_start Round 0 00:04:18.973 14:57:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1242729 /var/tmp/spdk-nbd.sock 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1242729 ']' 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.973 14:57:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.973 [2024-12-09 14:57:20.645210] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
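The app_repeat run that follows is set up the same way as the other run_test targets: start the app against a private RPC socket, install a cleanup trap, and block until it listens. Condensed from the event.sh trace above (killprocess and waitforlisten are the autotest_common.sh helpers already visible in this log; only the path shortening is mine):

app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
$app -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &          # two cores, four-second rounds
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock       # returns once the UNIX socket accepts RPCs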
00:04:18.973 [2024-12-09 14:57:20.645277] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242729 ] 00:04:18.973 [2024-12-09 14:57:20.723013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.280 [2024-12-09 14:57:20.763933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.280 [2024-12-09 14:57:20.763934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.280 14:57:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.280 14:57:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:19.280 14:57:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.280 Malloc0 00:04:19.539 14:57:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.539 Malloc1 00:04:19.539 14:57:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.539 14:57:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.797 /dev/nbd0 00:04:19.797 14:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.797 14:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.798 1+0 records in 00:04:19.798 1+0 records out 00:04:19.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022446 s, 18.2 MB/s 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:19.798 14:57:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:19.798 14:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.798 14:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.798 14:57:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.056 /dev/nbd1 00:04:20.056 14:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.056 14:57:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.056 1+0 records in 00:04:20.056 1+0 records out 00:04:20.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193784 s, 21.1 MB/s 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.056 14:57:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:20.057 14:57:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.057 14:57:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:20.057 14:57:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:20.057 14:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.057 14:57:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.057 14:57:21 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.057 14:57:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.057 14:57:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:20.314 { 00:04:20.314 "nbd_device": "/dev/nbd0", 00:04:20.314 "bdev_name": "Malloc0" 00:04:20.314 }, 00:04:20.314 { 00:04:20.314 "nbd_device": "/dev/nbd1", 00:04:20.314 "bdev_name": "Malloc1" 00:04:20.314 } 00:04:20.314 ]' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.314 { 00:04:20.314 "nbd_device": "/dev/nbd0", 00:04:20.314 "bdev_name": "Malloc0" 00:04:20.314 }, 00:04:20.314 { 00:04:20.314 "nbd_device": "/dev/nbd1", 00:04:20.314 "bdev_name": "Malloc1" 00:04:20.314 } 00:04:20.314 ]' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.314 /dev/nbd1' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.314 /dev/nbd1' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.314 256+0 records in 00:04:20.314 256+0 records out 00:04:20.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00327609 s, 320 MB/s 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.314 256+0 records in 00:04:20.314 256+0 records out 00:04:20.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140259 s, 74.8 MB/s 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.314 256+0 records in 00:04:20.314 256+0 records out 00:04:20.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146107 s, 71.8 MB/s 00:04:20.314 14:57:22 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.314 14:57:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.571 14:57:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.829 14:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:21.088 14:57:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:21.088 14:57:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.347 14:57:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.347 [2024-12-09 14:57:23.141869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.606 [2024-12-09 14:57:23.177871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.606 [2024-12-09 14:57:23.177871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.606 [2024-12-09 14:57:23.218206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.606 [2024-12-09 14:57:23.218259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.885 14:57:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.885 14:57:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.885 spdk_app_start Round 1 00:04:24.885 14:57:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1242729 /var/tmp/spdk-nbd.sock 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1242729 ']' 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
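Each round repeats the same malloc-bdev/NBD round trip that produced the dd and cmp output above. Stripped of the xtrace noise it is roughly the following sketch; the 64 MiB / 4096-byte malloc geometry, the 1 MiB random file, and the cmp length are the values used by this run, while the temp file location is illustrative:

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
malloc0=$($rpc bdev_malloc_create 64 4096)             # 64 MiB bdev, 4096-byte blocks
$rpc nbd_start_disk "$malloc0" /dev/nbd0
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                     # read back through the NBD device and verify
$rpc nbd_stop_disk /dev/nbd0
rm nbdrandtest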
00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.885 14:57:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:24.885 14:57:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.885 Malloc0 00:04:24.885 14:57:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.885 Malloc1 00:04:24.885 14:57:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.885 14:57:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.143 /dev/nbd0 00:04:25.143 14:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.143 14:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:25.143 1+0 records in 00:04:25.143 1+0 records out 00:04:25.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.0275e-05 s, 45.4 MB/s 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.143 14:57:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.143 14:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.143 14:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.143 14:57:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.401 /dev/nbd1 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.401 1+0 records in 00:04:25.401 1+0 records out 00:04:25.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000117218 s, 34.9 MB/s 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:25.401 14:57:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.401 14:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:25.658 { 00:04:25.658 "nbd_device": "/dev/nbd0", 00:04:25.658 "bdev_name": "Malloc0" 00:04:25.658 }, 00:04:25.658 { 00:04:25.658 "nbd_device": "/dev/nbd1", 00:04:25.658 "bdev_name": "Malloc1" 00:04:25.658 } 00:04:25.658 ]' 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.658 { 00:04:25.658 "nbd_device": "/dev/nbd0", 00:04:25.658 "bdev_name": "Malloc0" 00:04:25.658 }, 00:04:25.658 { 00:04:25.658 "nbd_device": "/dev/nbd1", 00:04:25.658 "bdev_name": "Malloc1" 00:04:25.658 } 00:04:25.658 ]' 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.658 /dev/nbd1' 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.658 /dev/nbd1' 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.658 14:57:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.659 256+0 records in 00:04:25.659 256+0 records out 00:04:25.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100046 s, 105 MB/s 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.659 256+0 records in 00:04:25.659 256+0 records out 00:04:25.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134972 s, 77.7 MB/s 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.659 256+0 records in 00:04:25.659 256+0 records out 00:04:25.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149411 s, 70.2 MB/s 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.659 14:57:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.916 14:57:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.174 14:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.432 14:57:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.432 14:57:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.690 14:57:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.690 [2024-12-09 14:57:28.410878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.690 [2024-12-09 14:57:28.446412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.690 [2024-12-09 14:57:28.446412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.948 [2024-12-09 14:57:28.487678] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.948 [2024-12-09 14:57:28.487713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.233 14:57:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:30.233 14:57:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:30.233 spdk_app_start Round 2 00:04:30.233 14:57:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1242729 /var/tmp/spdk-nbd.sock 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1242729 ']' 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
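The waitfornbd fragments traced above (grep against /proc/partitions, then a single direct-I/O read) boil down to the helper sketched below. The 20-try limit, the 4096-byte read, and the size check mirror the trace; the sleep interval and the temp path are assumptions, since the log does not show them:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                   # assumed back-off; not visible in this log
    done
    # prove the device answers I/O: one 4096-byte direct read must produce a non-empty file
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}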
00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.233 14:57:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:30.233 14:57:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.233 Malloc0 00:04:30.233 14:57:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.233 Malloc1 00:04:30.233 14:57:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.233 14:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.234 14:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.234 14:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.234 14:57:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.234 14:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.234 14:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.234 14:57:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.492 /dev/nbd0 00:04:30.492 14:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.492 14:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.492 14:57:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:30.492 14:57:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.492 14:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.492 14:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.492 14:57:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:30.492 14:57:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:30.493 1+0 records in 00:04:30.493 1+0 records out 00:04:30.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187857 s, 21.8 MB/s 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.493 14:57:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.493 14:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.493 14:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.493 14:57:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.751 /dev/nbd1 00:04:30.751 14:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.751 14:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.751 14:57:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:30.751 14:57:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:30.751 14:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:30.751 14:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:30.751 14:57:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.752 1+0 records in 00:04:30.752 1+0 records out 00:04:30.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196737 s, 20.8 MB/s 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:30.752 14:57:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:30.752 14:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.752 14:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.752 14:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.752 14:57:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.752 14:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:31.011 { 00:04:31.011 "nbd_device": "/dev/nbd0", 00:04:31.011 "bdev_name": "Malloc0" 00:04:31.011 }, 00:04:31.011 { 00:04:31.011 "nbd_device": "/dev/nbd1", 00:04:31.011 "bdev_name": "Malloc1" 00:04:31.011 } 00:04:31.011 ]' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.011 { 00:04:31.011 "nbd_device": "/dev/nbd0", 00:04:31.011 "bdev_name": "Malloc0" 00:04:31.011 }, 00:04:31.011 { 00:04:31.011 "nbd_device": "/dev/nbd1", 00:04:31.011 "bdev_name": "Malloc1" 00:04:31.011 } 00:04:31.011 ]' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.011 /dev/nbd1' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.011 /dev/nbd1' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.011 256+0 records in 00:04:31.011 256+0 records out 00:04:31.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996891 s, 105 MB/s 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.011 256+0 records in 00:04:31.011 256+0 records out 00:04:31.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143188 s, 73.2 MB/s 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.011 256+0 records in 00:04:31.011 256+0 records out 00:04:31.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146114 s, 71.8 MB/s 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.011 14:57:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.270 14:57:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.528 14:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.787 14:57:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.787 14:57:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.045 14:57:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.045 [2024-12-09 14:57:33.738178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.045 [2024-12-09 14:57:33.773531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.045 [2024-12-09 14:57:33.773531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.046 [2024-12-09 14:57:33.814018] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.046 [2024-12-09 14:57:33.814056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.329 14:57:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1242729 /var/tmp/spdk-nbd.sock 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1242729 ']' 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
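Teardown at the end of each round is symmetric: stop each NBD device, wait for its name to leave /proc/partitions, confirm nbd_get_disks reports no devices, then signal the app. In condensed form, with the device names and jq/grep filters taken from the trace and the loop bound reused from waitfornbd:

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
for dev in /dev/nbd0 /dev/nbd1; do
    $rpc nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break   # gone from /proc/partitions => fully stopped
    done
done
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]                                     # '[]' above means nothing is still exported
$rpc spdk_kill_instance SIGTERM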
00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:35.329 14:57:36 event.app_repeat -- event/event.sh@39 -- # killprocess 1242729 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1242729 ']' 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1242729 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242729 00:04:35.329 14:57:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.330 14:57:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.330 14:57:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242729' 00:04:35.330 killing process with pid 1242729 00:04:35.330 14:57:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1242729 00:04:35.330 14:57:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1242729 00:04:35.330 spdk_app_start is called in Round 0. 00:04:35.330 Shutdown signal received, stop current app iteration 00:04:35.330 Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 reinitialization... 00:04:35.330 spdk_app_start is called in Round 1. 00:04:35.330 Shutdown signal received, stop current app iteration 00:04:35.330 Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 reinitialization... 00:04:35.330 spdk_app_start is called in Round 2. 00:04:35.330 Shutdown signal received, stop current app iteration 00:04:35.330 Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 reinitialization... 00:04:35.330 spdk_app_start is called in Round 3. 
00:04:35.330 Shutdown signal received, stop current app iteration 00:04:35.330 14:57:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.330 14:57:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:35.330 00:04:35.330 real 0m16.385s 00:04:35.330 user 0m36.132s 00:04:35.330 sys 0m2.448s 00:04:35.330 14:57:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.330 14:57:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.330 ************************************ 00:04:35.330 END TEST app_repeat 00:04:35.330 ************************************ 00:04:35.330 14:57:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.330 14:57:37 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.330 14:57:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.330 14:57:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.330 14:57:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.330 ************************************ 00:04:35.330 START TEST cpu_locks 00:04:35.330 ************************************ 00:04:35.330 14:57:37 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.589 * Looking for test storage... 00:04:35.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.589 14:57:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.589 --rc genhtml_branch_coverage=1 00:04:35.589 --rc genhtml_function_coverage=1 00:04:35.589 --rc genhtml_legend=1 00:04:35.589 --rc geninfo_all_blocks=1 00:04:35.589 --rc geninfo_unexecuted_blocks=1 00:04:35.589 00:04:35.589 ' 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.589 --rc genhtml_branch_coverage=1 00:04:35.589 --rc genhtml_function_coverage=1 00:04:35.589 --rc genhtml_legend=1 00:04:35.589 --rc geninfo_all_blocks=1 00:04:35.589 --rc geninfo_unexecuted_blocks=1 00:04:35.589 00:04:35.589 ' 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.589 --rc genhtml_branch_coverage=1 00:04:35.589 --rc genhtml_function_coverage=1 00:04:35.589 --rc genhtml_legend=1 00:04:35.589 --rc geninfo_all_blocks=1 00:04:35.589 --rc geninfo_unexecuted_blocks=1 00:04:35.589 00:04:35.589 ' 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.589 --rc genhtml_branch_coverage=1 00:04:35.589 --rc genhtml_function_coverage=1 00:04:35.589 --rc genhtml_legend=1 00:04:35.589 --rc geninfo_all_blocks=1 00:04:35.589 --rc geninfo_unexecuted_blocks=1 00:04:35.589 00:04:35.589 ' 00:04:35.589 14:57:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.589 14:57:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.589 14:57:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.589 14:57:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.589 14:57:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.589 ************************************ 
00:04:35.589 START TEST default_locks 00:04:35.589 ************************************ 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1245698 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1245698 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1245698 ']' 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.589 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.589 [2024-12-09 14:57:37.330346] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:35.590 [2024-12-09 14:57:37.330392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245698 ] 00:04:35.848 [2024-12-09 14:57:37.405569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.848 [2024-12-09 14:57:37.446063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.106 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.106 14:57:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:36.106 14:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1245698 00:04:36.106 14:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1245698 00:04:36.106 14:57:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.365 lslocks: write error 00:04:36.365 14:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1245698 00:04:36.365 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1245698 ']' 00:04:36.365 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1245698 00:04:36.365 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:36.365 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.365 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1245698 00:04:36.623 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.623 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.623 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1245698' 00:04:36.623 killing process with pid 1245698 00:04:36.623 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1245698 00:04:36.623 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1245698 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1245698 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1245698 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1245698 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1245698 ']' 00:04:36.881 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1245698) - No such process 00:04:36.882 ERROR: process (pid: 1245698) is no longer running 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.882 00:04:36.882 real 0m1.190s 00:04:36.882 user 0m1.137s 00:04:36.882 sys 0m0.557s 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.882 14:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.882 ************************************ 00:04:36.882 END TEST default_locks 00:04:36.882 ************************************ 00:04:36.882 14:57:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:36.882 14:57:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.882 14:57:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.882 14:57:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.882 ************************************ 00:04:36.882 START TEST default_locks_via_rpc 00:04:36.882 ************************************ 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1245947 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1245947 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1245947 ']' 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
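The default_locks run that ends above leans on three helpers that recur in every cpu_locks test: locks_exist (does the pid still hold a /var/tmp/spdk_cpu_lock_* file lock; the stray "lslocks: write error" is most likely just lslocks reacting to grep -q closing the pipe early, not a failure), killprocess (kill the target and reap it), and NOT (invert a command's exit status so an expected failure counts as a pass, which is why the "No such process" / "ERROR: process ... is no longer running" lines are deliberate). Simplified sketches of the three, with the real argument checking and signal handling omitted:

# Does the given pid still hold one of the per-core lock files?
locks_exist_sketch() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# Succeed only if the wrapped command fails -- used for negative checks
# such as "waitforlisten must NOT find the killed pid".
NOT_sketch() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# Kill the target started by this shell and reap it; the real helper also
# verifies the process name (reactor_0) before sending the signal.
killprocess_sketch() {
    kill "$1" && wait "$1" || true
}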
00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.882 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.882 [2024-12-09 14:57:38.592432] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:36.882 [2024-12-09 14:57:38.592479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245947 ] 00:04:36.882 [2024-12-09 14:57:38.666006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.140 [2024-12-09 14:57:38.707357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.140 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.398 14:57:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.398 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1245947 00:04:37.398 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1245947 00:04:37.398 14:57:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.398 14:57:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1245947 00:04:37.398 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1245947 ']' 00:04:37.398 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1245947 00:04:37.398 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.398 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.398 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1245947 00:04:37.657 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.657 
14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.657 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1245947' 00:04:37.657 killing process with pid 1245947 00:04:37.657 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1245947 00:04:37.657 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1245947 00:04:37.915 00:04:37.915 real 0m0.963s 00:04:37.915 user 0m0.927s 00:04:37.915 sys 0m0.428s 00:04:37.915 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.915 14:57:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.915 ************************************ 00:04:37.915 END TEST default_locks_via_rpc 00:04:37.915 ************************************ 00:04:37.915 14:57:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.915 14:57:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.915 14:57:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.915 14:57:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.915 ************************************ 00:04:37.915 START TEST non_locking_app_on_locked_coremask 00:04:37.915 ************************************ 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1246198 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1246198 /var/tmp/spdk.sock 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1246198 ']' 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.915 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.915 [2024-12-09 14:57:39.624351] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
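default_locks_via_rpc, which finishes above, exercises the same per-core lock files but toggles them at runtime: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks re-takes them, and the test verifies each state in between. A condensed sketch of that sequence (rpc, sock and tgt_pid are illustrative variable names; compgen stands in for the trace's glob-based no_locks check so the sketch does not depend on nullglob):

rpc=./scripts/rpc.py
sock=/var/tmp/spdk.sock      # tgt_pid: pid of the spdk_tgt started with -m 0x1

"$rpc" -s "$sock" framework_disable_cpumask_locks
# While locks are off, no /var/tmp/spdk_cpu_lock_* files should remain.
! compgen -G '/var/tmp/spdk_cpu_lock_*' > /dev/null

"$rpc" -s "$sock" framework_enable_cpumask_locks
# The core 0 lock is back and held by the target again.
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock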
00:04:37.915 [2024-12-09 14:57:39.624393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246198 ] 00:04:37.915 [2024-12-09 14:57:39.698378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.174 [2024-12-09 14:57:39.740355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1246214 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1246214 /var/tmp/spdk2.sock 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1246214 ']' 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.174 14:57:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.432 [2024-12-09 14:57:39.999155] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:38.432 [2024-12-09 14:57:39.999204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246214 ] 00:04:38.432 [2024-12-09 14:57:40.094628] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:38.432 [2024-12-09 14:57:40.094657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.432 [2024-12-09 14:57:40.174891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.367 14:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.367 14:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:39.367 14:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1246198 00:04:39.367 14:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1246198 00:04:39.367 14:57:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.625 lslocks: write error 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1246198 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1246198 ']' 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1246198 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246198 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246198' 00:04:39.625 killing process with pid 1246198 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1246198 00:04:39.625 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1246198 00:04:40.192 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1246214 00:04:40.192 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1246214 ']' 00:04:40.192 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1246214 00:04:40.192 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:40.192 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.192 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246214 00:04:40.450 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.450 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.450 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246214' 00:04:40.450 
killing process with pid 1246214 00:04:40.450 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1246214 00:04:40.450 14:57:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1246214 00:04:40.708 00:04:40.708 real 0m2.723s 00:04:40.708 user 0m2.891s 00:04:40.708 sys 0m0.873s 00:04:40.708 14:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.708 14:57:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.708 ************************************ 00:04:40.708 END TEST non_locking_app_on_locked_coremask 00:04:40.708 ************************************ 00:04:40.708 14:57:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:40.708 14:57:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.708 14:57:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.708 14:57:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.708 ************************************ 00:04:40.708 START TEST locking_app_on_unlocked_coremask 00:04:40.708 ************************************ 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1246697 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1246697 /var/tmp/spdk.sock 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1246697 ']' 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.708 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.708 [2024-12-09 14:57:42.417546] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:40.708 [2024-12-09 14:57:42.417590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246697 ] 00:04:40.708 [2024-12-09 14:57:42.491692] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
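non_locking_app_on_locked_coremask, which ends above, shows the cooperative case: a second target may reuse a core that is already locked as long as it opts out of locking itself. Stripped of the waitforlisten/killprocess plumbing (pids illustrative; flags and sockets as in the trace):

bin=./build/bin/spdk_tgt

# First instance claims core 0 and takes /var/tmp/spdk_cpu_lock_000.
"$bin" -m 0x1 &
pid1=$!

# Second instance reuses core 0 but skips the lock, so it starts fine
# ("CPU core locks deactivated" in the log above).
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!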
00:04:40.708 [2024-12-09 14:57:42.491717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.966 [2024-12-09 14:57:42.532785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.966 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.966 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1246704 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1246704 /var/tmp/spdk2.sock 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1246704 ']' 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.967 14:57:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.225 [2024-12-09 14:57:42.777150] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:41.225 [2024-12-09 14:57:42.777196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246704 ] 00:04:41.225 [2024-12-09 14:57:42.862978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.225 [2024-12-09 14:57:42.949708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.160 14:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.160 14:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:42.160 14:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1246704 00:04:42.160 14:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1246704 00:04:42.160 14:57:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.418 lslocks: write error 00:04:42.418 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1246697 00:04:42.418 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1246697 ']' 00:04:42.418 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1246697 00:04:42.418 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.418 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.418 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246697 00:04:42.419 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.419 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.419 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246697' 00:04:42.419 killing process with pid 1246697 00:04:42.419 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1246697 00:04:42.419 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1246697 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1246704 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1246704 ']' 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1246704 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246704 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.986 14:57:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246704' 00:04:42.986 killing process with pid 1246704 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1246704 00:04:42.986 14:57:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1246704 00:04:43.245 00:04:43.245 real 0m2.653s 00:04:43.245 user 0m2.783s 00:04:43.245 sys 0m0.858s 00:04:43.245 14:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.245 14:57:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.245 ************************************ 00:04:43.245 END TEST locking_app_on_unlocked_coremask 00:04:43.246 ************************************ 00:04:43.513 14:57:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:43.513 14:57:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.513 14:57:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.513 14:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.513 ************************************ 00:04:43.513 START TEST locking_app_on_locked_coremask 00:04:43.513 ************************************ 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1247188 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1247188 /var/tmp/spdk.sock 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1247188 ']' 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.513 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.513 [2024-12-09 14:57:45.137562] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
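locking_app_on_unlocked_coremask, finishing above, is the mirror image of the previous case: the first instance runs unlocked, so the second one, with locking enabled, is the instance that ends up owning /var/tmp/spdk_cpu_lock_000. The only change from the sketch above is which invocation carries the flag:

bin=./build/bin/spdk_tgt

"$bin" -m 0x1 --disable-cpumask-locks &      # unlocked first instance
"$bin" -m 0x1 -r /var/tmp/spdk2.sock &       # locked second instance, same core
# lslocks on the second pid is what should now report spdk_cpu_lock.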
00:04:43.514 [2024-12-09 14:57:45.137603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247188 ] 00:04:43.514 [2024-12-09 14:57:45.209258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.514 [2024-12-09 14:57:45.244879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1247195 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1247195 /var/tmp/spdk2.sock 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1247195 /var/tmp/spdk2.sock 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1247195 /var/tmp/spdk2.sock 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1247195 ']' 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.772 14:57:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.772 [2024-12-09 14:57:45.522141] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:43.772 [2024-12-09 14:57:45.522190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247195 ] 00:04:44.031 [2024-12-09 14:57:45.610200] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1247188 has claimed it. 00:04:44.031 [2024-12-09 14:57:45.614243] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:44.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1247195) - No such process 00:04:44.597 ERROR: process (pid: 1247195) is no longer running 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1247188 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1247188 00:04:44.597 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.856 lslocks: write error 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1247188 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1247188 ']' 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1247188 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1247188 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1247188' 00:04:44.856 killing process with pid 1247188 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1247188 00:04:44.856 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1247188 00:04:45.115 00:04:45.115 real 0m1.744s 00:04:45.115 user 0m1.875s 00:04:45.115 sys 0m0.574s 00:04:45.115 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:45.115 14:57:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.115 ************************************ 00:04:45.115 END TEST locking_app_on_locked_coremask 00:04:45.115 ************************************ 00:04:45.115 14:57:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:45.115 14:57:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.115 14:57:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.115 14:57:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.115 ************************************ 00:04:45.115 START TEST locking_overlapped_coremask 00:04:45.115 ************************************ 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1247477 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1247477 /var/tmp/spdk.sock 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1247477 ']' 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.115 14:57:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.374 [2024-12-09 14:57:46.950799] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
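locking_app_on_locked_coremask, which ends above, is the conflict case: with locking left on in both instances, the second spdk_tgt cannot start on the claimed core, which is exactly the "Cannot create lock on core 0, probably process ... has claimed it" / "Unable to acquire lock on assigned core mask - exiting" pair in the log, and why its waitforlisten is wrapped in NOT. Reduced to its essentials (pids illustrative, readiness waits omitted):

bin=./build/bin/spdk_tgt

"$bin" -m 0x1 &                # claims core 0
pid1=$!
# ... wait for the first instance's RPC socket to come up ...

# Same mask, locking still enabled: startup is expected to fail, so the
# test treats a successful waitforlisten on the second socket as an error.
"$bin" -m 0x1 -r /var/tmp/spdk2.sock &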
00:04:45.374 [2024-12-09 14:57:46.950842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247477 ] 00:04:45.374 [2024-12-09 14:57:47.026476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.374 [2024-12-09 14:57:47.069302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.374 [2024-12-09 14:57:47.069407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.374 [2024-12-09 14:57:47.069408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1247673 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1247673 /var/tmp/spdk2.sock 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1247673 /var/tmp/spdk2.sock 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1247673 /var/tmp/spdk2.sock 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1247673 ']' 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.632 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.632 [2024-12-09 14:57:47.342026] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:45.632 [2024-12-09 14:57:47.342072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247673 ] 00:04:45.891 [2024-12-09 14:57:47.433298] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1247477 has claimed it. 00:04:45.891 [2024-12-09 14:57:47.433336] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:46.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1247673) - No such process 00:04:46.458 ERROR: process (pid: 1247673) is no longer running 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1247477 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1247477 ']' 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1247477 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.458 14:57:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1247477 00:04:46.458 14:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.458 14:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.458 14:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1247477' 00:04:46.458 killing process with pid 1247477 00:04:46.458 14:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1247477 00:04:46.458 14:57:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1247477 00:04:46.716 00:04:46.716 real 0m1.432s 00:04:46.716 user 0m3.953s 00:04:46.716 sys 0m0.394s 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.716 ************************************ 00:04:46.716 END TEST locking_overlapped_coremask 00:04:46.716 ************************************ 00:04:46.716 14:57:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:46.716 14:57:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.716 14:57:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.716 14:57:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.716 ************************************ 00:04:46.716 START TEST locking_overlapped_coremask_via_rpc 00:04:46.716 ************************************ 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1247825 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1247825 /var/tmp/spdk.sock 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1247825 ']' 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.716 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.717 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.717 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.717 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.717 [2024-12-09 14:57:48.444133] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:46.717 [2024-12-09 14:57:48.444174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247825 ] 00:04:46.975 [2024-12-09 14:57:48.515934] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:46.975 [2024-12-09 14:57:48.515963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.975 [2024-12-09 14:57:48.556759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.975 [2024-12-09 14:57:48.556869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.975 [2024-12-09 14:57:48.556869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1247934 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1247934 /var/tmp/spdk2.sock 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1247934 ']' 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.234 14:57:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.234 [2024-12-09 14:57:48.826939] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:47.234 [2024-12-09 14:57:48.826986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247934 ] 00:04:47.234 [2024-12-09 14:57:48.916144] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.234 [2024-12-09 14:57:48.916175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.234 [2024-12-09 14:57:48.997964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.234 [2024-12-09 14:57:49.001259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.234 [2024-12-09 14:57:49.001260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:48.169 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.169 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.169 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.169 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.169 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 [2024-12-09 14:57:49.665286] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1247825 has claimed it. 
00:04:48.170 request: 00:04:48.170 { 00:04:48.170 "method": "framework_enable_cpumask_locks", 00:04:48.170 "req_id": 1 00:04:48.170 } 00:04:48.170 Got JSON-RPC error response 00:04:48.170 response: 00:04:48.170 { 00:04:48.170 "code": -32603, 00:04:48.170 "message": "Failed to claim CPU core: 2" 00:04:48.170 } 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1247825 /var/tmp/spdk.sock 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1247825 ']' 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1247934 /var/tmp/spdk2.sock 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1247934 ']' 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
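For reference, the failure recorded just above can be reproduced by hand. A minimal sketch, assuming paths relative to the spdk checkout and that rpc_cmd in the trace is a thin wrapper around scripts/rpc.py (the harness itself uses waitforlisten rather than plain backgrounding):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                            # cores 0-2, lock files deferred
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &    # cores 2-4, second RPC socket
  ./scripts/rpc.py framework_enable_cpumask_locks                                  # 0x7 target creates /var/tmp/spdk_cpu_lock_000..002
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks           # fails with -32603: core 2 already claimed

The first RPC leaves the 0x7 target holding /var/tmp/spdk_cpu_lock_000 through _002, which is the response shown above and what check_remaining_locks then verifies.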
00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.170 14:57:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.429 00:04:48.429 real 0m1.718s 00:04:48.429 user 0m0.844s 00:04:48.429 sys 0m0.132s 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.429 14:57:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.429 ************************************ 00:04:48.429 END TEST locking_overlapped_coremask_via_rpc 00:04:48.429 ************************************ 00:04:48.429 14:57:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.429 14:57:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1247825 ]] 00:04:48.429 14:57:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1247825 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1247825 ']' 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1247825 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1247825 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1247825' 00:04:48.429 killing process with pid 1247825 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1247825 00:04:48.429 14:57:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1247825 00:04:48.996 14:57:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1247934 ]] 00:04:48.996 14:57:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1247934 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1247934 ']' 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1247934 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1247934 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1247934' 00:04:48.996 killing process with pid 1247934 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1247934 00:04:48.996 14:57:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1247934 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1247825 ]] 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1247825 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1247825 ']' 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1247825 00:04:49.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1247825) - No such process 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1247825 is not found' 00:04:49.256 Process with pid 1247825 is not found 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1247934 ]] 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1247934 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1247934 ']' 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1247934 00:04:49.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1247934) - No such process 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1247934 is not found' 00:04:49.256 Process with pid 1247934 is not found 00:04:49.256 14:57:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.256 00:04:49.256 real 0m13.816s 00:04:49.256 user 0m24.228s 00:04:49.256 sys 0m4.777s 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.256 14:57:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.256 ************************************ 00:04:49.256 END TEST cpu_locks 00:04:49.256 ************************************ 00:04:49.256 00:04:49.256 real 0m38.811s 00:04:49.256 user 1m14.802s 00:04:49.256 sys 0m8.191s 00:04:49.256 14:57:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.256 14:57:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.256 ************************************ 00:04:49.256 END TEST event 00:04:49.256 ************************************ 00:04:49.256 14:57:50 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.256 14:57:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.256 14:57:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.256 14:57:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.256 ************************************ 00:04:49.256 START TEST thread 00:04:49.256 ************************************ 00:04:49.256 14:57:50 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:49.515 * Looking for test storage... 00:04:49.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.515 14:57:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.515 14:57:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.515 14:57:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.515 14:57:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.515 14:57:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.515 14:57:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.515 14:57:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.515 14:57:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.515 14:57:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.515 14:57:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.515 14:57:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.515 14:57:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:49.515 14:57:51 thread -- scripts/common.sh@345 -- # : 1 00:04:49.515 14:57:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.515 14:57:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.515 14:57:51 thread -- scripts/common.sh@365 -- # decimal 1 00:04:49.515 14:57:51 thread -- scripts/common.sh@353 -- # local d=1 00:04:49.515 14:57:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.515 14:57:51 thread -- scripts/common.sh@355 -- # echo 1 00:04:49.515 14:57:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.515 14:57:51 thread -- scripts/common.sh@366 -- # decimal 2 00:04:49.515 14:57:51 thread -- scripts/common.sh@353 -- # local d=2 00:04:49.515 14:57:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.515 14:57:51 thread -- scripts/common.sh@355 -- # echo 2 00:04:49.515 14:57:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.515 14:57:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.515 14:57:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.515 14:57:51 thread -- scripts/common.sh@368 -- # return 0 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.515 --rc genhtml_branch_coverage=1 00:04:49.515 --rc genhtml_function_coverage=1 00:04:49.515 --rc genhtml_legend=1 00:04:49.515 --rc geninfo_all_blocks=1 00:04:49.515 --rc geninfo_unexecuted_blocks=1 00:04:49.515 00:04:49.515 ' 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.515 --rc genhtml_branch_coverage=1 00:04:49.515 --rc genhtml_function_coverage=1 00:04:49.515 --rc genhtml_legend=1 00:04:49.515 --rc geninfo_all_blocks=1 00:04:49.515 --rc geninfo_unexecuted_blocks=1 00:04:49.515 
00:04:49.515 ' 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.515 --rc genhtml_branch_coverage=1 00:04:49.515 --rc genhtml_function_coverage=1 00:04:49.515 --rc genhtml_legend=1 00:04:49.515 --rc geninfo_all_blocks=1 00:04:49.515 --rc geninfo_unexecuted_blocks=1 00:04:49.515 00:04:49.515 ' 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.515 --rc genhtml_branch_coverage=1 00:04:49.515 --rc genhtml_function_coverage=1 00:04:49.515 --rc genhtml_legend=1 00:04:49.515 --rc geninfo_all_blocks=1 00:04:49.515 --rc geninfo_unexecuted_blocks=1 00:04:49.515 00:04:49.515 ' 00:04:49.515 14:57:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.515 14:57:51 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.515 ************************************ 00:04:49.515 START TEST thread_poller_perf 00:04:49.515 ************************************ 00:04:49.515 14:57:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.515 [2024-12-09 14:57:51.222932] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:49.515 [2024-12-09 14:57:51.222995] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248425 ] 00:04:49.515 [2024-12-09 14:57:51.303411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.774 [2024-12-09 14:57:51.343345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.774 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:50.710 [2024-12-09T13:57:52.505Z] ====================================== 00:04:50.710 [2024-12-09T13:57:52.505Z] busy:2106644206 (cyc) 00:04:50.710 [2024-12-09T13:57:52.505Z] total_run_count: 423000 00:04:50.710 [2024-12-09T13:57:52.505Z] tsc_hz: 2100000000 (cyc) 00:04:50.710 [2024-12-09T13:57:52.505Z] ====================================== 00:04:50.710 [2024-12-09T13:57:52.505Z] poller_cost: 4980 (cyc), 2371 (nsec) 00:04:50.710 00:04:50.710 real 0m1.184s 00:04:50.710 user 0m1.102s 00:04:50.710 sys 0m0.078s 00:04:50.710 14:57:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.710 14:57:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:50.710 ************************************ 00:04:50.710 END TEST thread_poller_perf 00:04:50.710 ************************************ 00:04:50.710 14:57:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.710 14:57:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:50.710 14:57:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.710 14:57:52 thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.710 ************************************ 00:04:50.710 START TEST thread_poller_perf 00:04:50.710 ************************************ 00:04:50.710 14:57:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.710 [2024-12-09 14:57:52.480524] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:04:50.710 [2024-12-09 14:57:52.480595] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248600 ] 00:04:50.969 [2024-12-09 14:57:52.561112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.969 [2024-12-09 14:57:52.603509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.969 Running 1000 pollers for 1 seconds with 0 microseconds period. 
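The summary blocks in this test report raw TSC figures; poller_cost is consistent with busy cycles divided by total_run_count, converted to time via tsc_hz. For the 1 microsecond period run above:

  2106644206 cyc / 423000 runs   = ~4980 cyc per poller invocation
  4980 cyc / 2100000000 Hz       = ~2371 nsec

and for the 0 microsecond period run reported next, 2101379622 / 5244000 gives ~400 cyc, about 190 nsec.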
00:04:51.905 [2024-12-09T13:57:53.700Z] ====================================== 00:04:51.905 [2024-12-09T13:57:53.700Z] busy:2101379622 (cyc) 00:04:51.905 [2024-12-09T13:57:53.700Z] total_run_count: 5244000 00:04:51.905 [2024-12-09T13:57:53.700Z] tsc_hz: 2100000000 (cyc) 00:04:51.905 [2024-12-09T13:57:53.700Z] ====================================== 00:04:51.905 [2024-12-09T13:57:53.700Z] poller_cost: 400 (cyc), 190 (nsec) 00:04:51.905 00:04:51.905 real 0m1.182s 00:04:51.905 user 0m1.098s 00:04:51.905 sys 0m0.079s 00:04:51.905 14:57:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.905 14:57:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.905 ************************************ 00:04:51.905 END TEST thread_poller_perf 00:04:51.905 ************************************ 00:04:51.905 14:57:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:51.905 00:04:51.905 real 0m2.690s 00:04:51.905 user 0m2.359s 00:04:51.905 sys 0m0.346s 00:04:51.905 14:57:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.905 14:57:53 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.905 ************************************ 00:04:51.905 END TEST thread 00:04:51.905 ************************************ 00:04:52.164 14:57:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:52.164 14:57:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.164 14:57:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.164 14:57:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.164 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.164 ************************************ 00:04:52.164 START TEST app_cmdline 00:04:52.164 ************************************ 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:52.164 * Looking for test storage... 
00:04:52.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.164 14:57:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.164 --rc genhtml_branch_coverage=1 00:04:52.164 --rc genhtml_function_coverage=1 00:04:52.164 --rc genhtml_legend=1 00:04:52.164 --rc geninfo_all_blocks=1 00:04:52.164 --rc geninfo_unexecuted_blocks=1 00:04:52.164 00:04:52.164 ' 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.164 --rc genhtml_branch_coverage=1 00:04:52.164 --rc genhtml_function_coverage=1 00:04:52.164 --rc genhtml_legend=1 00:04:52.164 --rc geninfo_all_blocks=1 00:04:52.164 --rc geninfo_unexecuted_blocks=1 
00:04:52.164 00:04:52.164 ' 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.164 --rc genhtml_branch_coverage=1 00:04:52.164 --rc genhtml_function_coverage=1 00:04:52.164 --rc genhtml_legend=1 00:04:52.164 --rc geninfo_all_blocks=1 00:04:52.164 --rc geninfo_unexecuted_blocks=1 00:04:52.164 00:04:52.164 ' 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.164 --rc genhtml_branch_coverage=1 00:04:52.164 --rc genhtml_function_coverage=1 00:04:52.164 --rc genhtml_legend=1 00:04:52.164 --rc geninfo_all_blocks=1 00:04:52.164 --rc geninfo_unexecuted_blocks=1 00:04:52.164 00:04:52.164 ' 00:04:52.164 14:57:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:52.164 14:57:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1248941 00:04:52.164 14:57:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1248941 00:04:52.164 14:57:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1248941 ']' 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.164 14:57:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.423 [2024-12-09 14:57:53.979491] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:04:52.423 [2024-12-09 14:57:53.979541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248941 ] 00:04:52.423 [2024-12-09 14:57:54.042444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.423 [2024-12-09 14:57:54.083956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.682 14:57:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.682 14:57:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:52.682 { 00:04:52.682 "version": "SPDK v25.01-pre git sha1 3318278a6", 00:04:52.682 "fields": { 00:04:52.682 "major": 25, 00:04:52.682 "minor": 1, 00:04:52.682 "patch": 0, 00:04:52.682 "suffix": "-pre", 00:04:52.682 "commit": "3318278a6" 00:04:52.682 } 00:04:52.682 } 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:52.682 14:57:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.940 14:57:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.940 14:57:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:52.940 14:57:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:52.940 14:57:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:52.940 14:57:54 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:52.940 request: 00:04:52.940 { 00:04:52.940 "method": "env_dpdk_get_mem_stats", 00:04:52.940 "req_id": 1 00:04:52.940 } 00:04:52.940 Got JSON-RPC error response 00:04:52.940 response: 00:04:52.940 { 00:04:52.940 "code": -32601, 00:04:52.940 "message": "Method not found" 00:04:52.940 } 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.941 14:57:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1248941 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1248941 ']' 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1248941 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.941 14:57:54 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1248941 00:04:53.199 14:57:54 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.199 14:57:54 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.199 14:57:54 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1248941' 00:04:53.199 killing process with pid 1248941 00:04:53.199 14:57:54 app_cmdline -- common/autotest_common.sh@973 -- # kill 1248941 00:04:53.199 14:57:54 app_cmdline -- common/autotest_common.sh@978 -- # wait 1248941 00:04:53.459 00:04:53.459 real 0m1.302s 00:04:53.459 user 0m1.528s 00:04:53.459 sys 0m0.430s 00:04:53.459 14:57:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.459 14:57:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:53.459 ************************************ 00:04:53.459 END TEST app_cmdline 00:04:53.459 ************************************ 00:04:53.459 14:57:55 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.459 14:57:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.459 14:57:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.459 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.459 ************************************ 00:04:53.459 START TEST version 00:04:53.459 ************************************ 00:04:53.459 14:57:55 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:53.459 * Looking for test storage... 
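The cmdline test that just finished exercises the --rpcs-allowed allow-list. A minimal sketch of the behaviour it checks, using only commands that appear in the trace (paths relative to the spdk checkout):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
  ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # not on the list: JSON-RPC error -32601, Method not found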
00:04:53.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:53.459 14:57:55 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.459 14:57:55 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.459 14:57:55 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.718 14:57:55 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.718 14:57:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.718 14:57:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.718 14:57:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.718 14:57:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.718 14:57:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.718 14:57:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.718 14:57:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.718 14:57:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.718 14:57:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.718 14:57:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.718 14:57:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.718 14:57:55 version -- scripts/common.sh@344 -- # case "$op" in 00:04:53.718 14:57:55 version -- scripts/common.sh@345 -- # : 1 00:04:53.718 14:57:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.718 14:57:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.718 14:57:55 version -- scripts/common.sh@365 -- # decimal 1 00:04:53.718 14:57:55 version -- scripts/common.sh@353 -- # local d=1 00:04:53.718 14:57:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.718 14:57:55 version -- scripts/common.sh@355 -- # echo 1 00:04:53.718 14:57:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.718 14:57:55 version -- scripts/common.sh@366 -- # decimal 2 00:04:53.718 14:57:55 version -- scripts/common.sh@353 -- # local d=2 00:04:53.718 14:57:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.718 14:57:55 version -- scripts/common.sh@355 -- # echo 2 00:04:53.718 14:57:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.719 14:57:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.719 14:57:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.719 14:57:55 version -- scripts/common.sh@368 -- # return 0 00:04:53.719 14:57:55 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.719 14:57:55 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.719 --rc genhtml_branch_coverage=1 00:04:53.719 --rc genhtml_function_coverage=1 00:04:53.719 --rc genhtml_legend=1 00:04:53.719 --rc geninfo_all_blocks=1 00:04:53.719 --rc geninfo_unexecuted_blocks=1 00:04:53.719 00:04:53.719 ' 00:04:53.719 14:57:55 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.719 --rc genhtml_branch_coverage=1 00:04:53.719 --rc genhtml_function_coverage=1 00:04:53.719 --rc genhtml_legend=1 00:04:53.719 --rc geninfo_all_blocks=1 00:04:53.719 --rc geninfo_unexecuted_blocks=1 00:04:53.719 00:04:53.719 ' 00:04:53.719 14:57:55 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.719 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.719 --rc genhtml_branch_coverage=1 00:04:53.719 --rc genhtml_function_coverage=1 00:04:53.719 --rc genhtml_legend=1 00:04:53.719 --rc geninfo_all_blocks=1 00:04:53.719 --rc geninfo_unexecuted_blocks=1 00:04:53.719 00:04:53.719 ' 00:04:53.719 14:57:55 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.719 --rc genhtml_branch_coverage=1 00:04:53.719 --rc genhtml_function_coverage=1 00:04:53.719 --rc genhtml_legend=1 00:04:53.719 --rc geninfo_all_blocks=1 00:04:53.719 --rc geninfo_unexecuted_blocks=1 00:04:53.719 00:04:53.719 ' 00:04:53.719 14:57:55 version -- app/version.sh@17 -- # get_header_version major 00:04:53.719 14:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.719 14:57:55 version -- app/version.sh@17 -- # major=25 00:04:53.719 14:57:55 version -- app/version.sh@18 -- # get_header_version minor 00:04:53.719 14:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.719 14:57:55 version -- app/version.sh@18 -- # minor=1 00:04:53.719 14:57:55 version -- app/version.sh@19 -- # get_header_version patch 00:04:53.719 14:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.719 14:57:55 version -- app/version.sh@19 -- # patch=0 00:04:53.719 14:57:55 version -- app/version.sh@20 -- # get_header_version suffix 00:04:53.719 14:57:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # cut -f2 00:04:53.719 14:57:55 version -- app/version.sh@14 -- # tr -d '"' 00:04:53.719 14:57:55 version -- app/version.sh@20 -- # suffix=-pre 00:04:53.719 14:57:55 version -- app/version.sh@22 -- # version=25.1 00:04:53.719 14:57:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:53.719 14:57:55 version -- app/version.sh@28 -- # version=25.1rc0 00:04:53.719 14:57:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:53.719 14:57:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:53.719 14:57:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:53.719 14:57:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:53.719 00:04:53.719 real 0m0.245s 00:04:53.719 user 0m0.152s 00:04:53.719 sys 0m0.135s 00:04:53.719 14:57:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.719 
14:57:55 version -- common/autotest_common.sh@10 -- # set +x 00:04:53.719 ************************************ 00:04:53.719 END TEST version 00:04:53.719 ************************************ 00:04:53.719 14:57:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:53.719 14:57:55 -- spdk/autotest.sh@194 -- # uname -s 00:04:53.719 14:57:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:53.719 14:57:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.719 14:57:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:53.719 14:57:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:53.719 14:57:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.719 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.719 14:57:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:53.719 14:57:55 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:53.719 14:57:55 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.719 14:57:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.719 14:57:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.719 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.719 ************************************ 00:04:53.719 START TEST nvmf_tcp 00:04:53.719 ************************************ 00:04:53.719 14:57:55 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:53.978 * Looking for test storage... 
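The version test above never talks to a running target; it derives everything from the source tree. Roughly, the traced steps amount to (run from the spdk checkout, with the grep repeated for MINOR, PATCH and SUFFIX):

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
  PYTHONPATH=./python python3 -c 'import spdk; print(spdk.__version__)'                            # -> 25.1rc0

and the test passes when the header-derived string (25.1 with rc0 appended for the -pre suffix) matches what the Python package reports.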
00:04:53.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.978 14:57:55 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.978 --rc genhtml_branch_coverage=1 00:04:53.978 --rc genhtml_function_coverage=1 00:04:53.978 --rc genhtml_legend=1 00:04:53.978 --rc geninfo_all_blocks=1 00:04:53.978 --rc geninfo_unexecuted_blocks=1 00:04:53.978 00:04:53.978 ' 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.978 --rc genhtml_branch_coverage=1 00:04:53.978 --rc genhtml_function_coverage=1 00:04:53.978 --rc genhtml_legend=1 00:04:53.978 --rc geninfo_all_blocks=1 00:04:53.978 --rc geninfo_unexecuted_blocks=1 00:04:53.978 00:04:53.978 ' 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.978 --rc genhtml_branch_coverage=1 00:04:53.978 --rc genhtml_function_coverage=1 00:04:53.978 --rc genhtml_legend=1 00:04:53.978 --rc geninfo_all_blocks=1 00:04:53.978 --rc geninfo_unexecuted_blocks=1 00:04:53.978 00:04:53.978 ' 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.978 --rc genhtml_branch_coverage=1 00:04:53.978 --rc genhtml_function_coverage=1 00:04:53.978 --rc genhtml_legend=1 00:04:53.978 --rc geninfo_all_blocks=1 00:04:53.978 --rc geninfo_unexecuted_blocks=1 00:04:53.978 00:04:53.978 ' 00:04:53.978 14:57:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:53.978 14:57:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:53.978 14:57:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.978 14:57:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.978 ************************************ 00:04:53.978 START TEST nvmf_target_core 00:04:53.978 ************************************ 00:04:53.978 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:53.978 * Looking for test storage... 00:04:54.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.237 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.238 --rc genhtml_branch_coverage=1 00:04:54.238 --rc genhtml_function_coverage=1 00:04:54.238 --rc genhtml_legend=1 00:04:54.238 --rc geninfo_all_blocks=1 00:04:54.238 --rc geninfo_unexecuted_blocks=1 00:04:54.238 00:04:54.238 ' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.238 --rc genhtml_branch_coverage=1 00:04:54.238 --rc genhtml_function_coverage=1 00:04:54.238 --rc genhtml_legend=1 00:04:54.238 --rc geninfo_all_blocks=1 00:04:54.238 --rc geninfo_unexecuted_blocks=1 00:04:54.238 00:04:54.238 ' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.238 --rc genhtml_branch_coverage=1 00:04:54.238 --rc genhtml_function_coverage=1 00:04:54.238 --rc genhtml_legend=1 00:04:54.238 --rc geninfo_all_blocks=1 00:04:54.238 --rc geninfo_unexecuted_blocks=1 00:04:54.238 00:04:54.238 ' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.238 --rc genhtml_branch_coverage=1 00:04:54.238 --rc genhtml_function_coverage=1 00:04:54.238 --rc genhtml_legend=1 00:04:54.238 --rc geninfo_all_blocks=1 00:04:54.238 --rc geninfo_unexecuted_blocks=1 00:04:54.238 00:04:54.238 ' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:54.238 
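Note on the version probe that keeps repeating in the trace above: the `lt 1.15 2` / `cmp_versions 1.15 '<' 2` lines are the harness asking whether the installed lcov (1.15 here) is older than 2 before it enables the extra branch/function coverage flags in LCOV_OPTS. A standalone sketch of that field-wise comparison, written from the traced logic rather than copied out of scripts/common.sh, looks like this:

  #!/usr/bin/env bash
  # Sketch of a field-wise "is ver1 < ver2" check, modeled on the traced
  # cmp_versions logic: split on '.', '-' and ':' and compare per field.
  version_lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          # Missing fields count as 0 (e.g. comparing "1.15" against "2").
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal -> not less-than
  }

  # Same decision the trace makes: lcov 1.15 is older than 2, so the extra
  # --rc lcov_*_coverage=1 options get added to LCOV_OPTS.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi

The same check is re-run at the top of every nested test scope, which is why the identical block of lcov lines reappears before nvmf_abort and nvmf_ns_hotplug_stress below.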
************************************ 00:04:54.238 START TEST nvmf_abort 00:04:54.238 ************************************ 00:04:54.238 14:57:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:54.238 * Looking for test storage... 00:04:54.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:54.238 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.238 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.238 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.498 --rc genhtml_branch_coverage=1 00:04:54.498 --rc genhtml_function_coverage=1 00:04:54.498 --rc genhtml_legend=1 00:04:54.498 --rc geninfo_all_blocks=1 00:04:54.498 --rc geninfo_unexecuted_blocks=1 00:04:54.498 00:04:54.498 ' 00:04:54.498 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.498 --rc genhtml_branch_coverage=1 00:04:54.499 --rc genhtml_function_coverage=1 00:04:54.499 --rc genhtml_legend=1 00:04:54.499 --rc geninfo_all_blocks=1 00:04:54.499 --rc geninfo_unexecuted_blocks=1 00:04:54.499 00:04:54.499 ' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.499 --rc genhtml_branch_coverage=1 00:04:54.499 --rc genhtml_function_coverage=1 00:04:54.499 --rc genhtml_legend=1 00:04:54.499 --rc geninfo_all_blocks=1 00:04:54.499 --rc geninfo_unexecuted_blocks=1 00:04:54.499 00:04:54.499 ' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.499 --rc genhtml_branch_coverage=1 00:04:54.499 --rc genhtml_function_coverage=1 00:04:54.499 --rc genhtml_legend=1 00:04:54.499 --rc geninfo_all_blocks=1 00:04:54.499 --rc geninfo_unexecuted_blocks=1 00:04:54.499 00:04:54.499 ' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
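The `[: : integer expression expected` complaint from nvmf/common.sh line 33, which appears again just above, is ordinary bash behaviour rather than a test failure: `'[' '' -eq 1 ']'` is a numeric test handed an empty string, and the harness simply ignores the non-zero status. A minimal reproduction and the usual guards, using a made-up flag name (SOME_TEST_FLAG) purely for illustration:

  #!/usr/bin/env bash
  # Reproduce the "[: : integer expression expected" noise seen in the trace:
  # an unset/empty variable handed to a numeric test.
  unset SOME_TEST_FLAG
  if [ "$SOME_TEST_FLAG" -eq 1 ]; then   # prints the error, test exits non-zero
      echo "would add the optional nvmf_tgt arguments"
  fi

  # Two common ways to make the test well-defined:
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # default the empty value to 0
      echo "enabled"
  fi
  if [[ "$SOME_TEST_FLAG" == 1 ]]; then       # string compare never errors
      echo "enabled"
  fi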
00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:54.499 14:57:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:01.072 14:58:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:01.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:01.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:01.072 14:58:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:01.072 Found net devices under 0000:af:00.0: cvl_0_0 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:01.072 Found net devices under 0000:af:00.1: cvl_0_1 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:01.072 14:58:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:01.072 14:58:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:01.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:01.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:05:01.072 00:05:01.072 --- 10.0.0.2 ping statistics --- 00:05:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.072 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:01.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:01.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:05:01.072 00:05:01.072 --- 10.0.0.1 ping statistics --- 00:05:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:01.072 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:01.072 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1252577 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1252577 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1252577 ']' 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.073 14:58:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 [2024-12-09 14:58:02.340202] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:05:01.073 [2024-12-09 14:58:02.340257] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:01.073 [2024-12-09 14:58:02.420025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.073 [2024-12-09 14:58:02.463846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:01.073 [2024-12-09 14:58:02.463880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:01.073 [2024-12-09 14:58:02.463887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.073 [2024-12-09 14:58:02.463893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.073 [2024-12-09 14:58:02.463898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:01.073 [2024-12-09 14:58:02.465234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.073 [2024-12-09 14:58:02.465325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.073 [2024-12-09 14:58:02.465327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.638 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.638 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:01.638 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:01.638 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.638 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.638 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 [2024-12-09 14:58:03.215165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 Malloc0 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 Delay0 
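By this point abort.sh has nvmf_tgt running inside the cvl_0_0_ns_spdk namespace (pid 1252577, core mask 0xE) and has issued its first RPCs: a TCP transport, a 64 MiB malloc bdev, and a delay bdev layered on top of it. A hand-run sketch of the same sequence with scripts/rpc.py, mirroring the traced rpc_cmd arguments (this is not the test script itself, and it assumes the target is listening on the default /var/tmp/spdk.sock RPC socket):

  #!/usr/bin/env bash
  # Replay of the RPC sequence visible in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  # TCP transport; the option string is copied verbatim from the traced
  # nvmf_create_transport call.
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256

  # 64 MiB, 4096-byte-block RAM bdev (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE in
  # abort.sh), then a delay bdev stacked on top of it with the latency
  # parameters used by the test.
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000

The subsystem, namespace, and listener RPCs that follow in the trace below (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) complete the target side before the abort example connects.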
00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 [2024-12-09 14:58:03.285488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.639 14:58:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:01.639 [2024-12-09 14:58:03.411489] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:04.181 Initializing NVMe Controllers 00:05:04.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:04.181 controller IO queue size 128 less than required 00:05:04.181 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:04.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:04.181 Initialization complete. Launching workers. 
00:05:04.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37281 00:05:04.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37342, failed to submit 62 00:05:04.181 success 37285, unsuccessful 57, failed 0 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:04.181 rmmod nvme_tcp 00:05:04.181 rmmod nvme_fabrics 00:05:04.181 rmmod nvme_keyring 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1252577 ']' 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1252577 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1252577 ']' 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1252577 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1252577 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1252577' 00:05:04.181 killing process with pid 1252577 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1252577 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1252577 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:04.181 14:58:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:04.181 14:58:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:06.192 00:05:06.192 real 0m11.898s 00:05:06.192 user 0m13.387s 00:05:06.192 sys 0m5.483s 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.192 ************************************ 00:05:06.192 END TEST nvmf_abort 00:05:06.192 ************************************ 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:06.192 ************************************ 00:05:06.192 START TEST nvmf_ns_hotplug_stress 00:05:06.192 ************************************ 00:05:06.192 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:06.192 * Looking for test storage... 
00:05:06.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:06.452 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.452 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.452 14:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.452 --rc genhtml_branch_coverage=1 00:05:06.452 --rc genhtml_function_coverage=1 00:05:06.452 --rc genhtml_legend=1 00:05:06.452 --rc geninfo_all_blocks=1 00:05:06.452 --rc geninfo_unexecuted_blocks=1 00:05:06.452 00:05:06.452 ' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.452 --rc genhtml_branch_coverage=1 00:05:06.452 --rc genhtml_function_coverage=1 00:05:06.452 --rc genhtml_legend=1 00:05:06.452 --rc geninfo_all_blocks=1 00:05:06.452 --rc geninfo_unexecuted_blocks=1 00:05:06.452 00:05:06.452 ' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.452 --rc genhtml_branch_coverage=1 00:05:06.452 --rc genhtml_function_coverage=1 00:05:06.452 --rc genhtml_legend=1 00:05:06.452 --rc geninfo_all_blocks=1 00:05:06.452 --rc geninfo_unexecuted_blocks=1 00:05:06.452 00:05:06.452 ' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.452 --rc genhtml_branch_coverage=1 00:05:06.452 --rc genhtml_function_coverage=1 00:05:06.452 --rc genhtml_legend=1 00:05:06.452 --rc geninfo_all_blocks=1 00:05:06.452 --rc geninfo_unexecuted_blocks=1 00:05:06.452 00:05:06.452 ' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.452 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:06.453 14:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.022 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:13.022 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:13.022 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:13.022 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:13.022 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:13.023 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.023 
14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:13.023 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:13.023 Found net devices under 0000:af:00.0: cvl_0_0 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:13.023 Found net devices under 0000:af:00.1: cvl_0_1 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:13.023 14:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:13.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:13.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:05:13.023 00:05:13.023 --- 10.0.0.2 ping statistics --- 00:05:13.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.023 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:13.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:13.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:05:13.023 00:05:13.023 --- 10.0.0.1 ping statistics --- 00:05:13.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:13.023 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:13.023 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1256682 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1256682 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1256682 ']' 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.024 [2024-12-09 14:58:14.200169] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:05:13.024 [2024-12-09 14:58:14.200228] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:13.024 [2024-12-09 14:58:14.275874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.024 [2024-12-09 14:58:14.313756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:13.024 [2024-12-09 14:58:14.313792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:13.024 [2024-12-09 14:58:14.313798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.024 [2024-12-09 14:58:14.313805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.024 [2024-12-09 14:58:14.313810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
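The sequence above is the TCP test-bed bring-up from nvmf/common.sh: the two E810 ports discovered at 0000:af:00.0/00.1 (cvl_0_0 and cvl_0_1) are split so that cvl_0_0 lives in a private network namespace as 10.0.0.2 while cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened through iptables, both directions are verified with ping, and only then is nvmf_tgt started inside the namespace. A minimal stand-alone sketch of those steps, assembled only from the commands visible in this log (interface names, addresses and the namespace name are specific to this run):

# Sketch of the namespace topology this run set up; the iptables rule in the log
# also carries an SPDK_NVMF comment tag, omitted here for brevity.
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns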
00:05:13.024 [2024-12-09 14:58:14.315129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.024 [2024-12-09 14:58:14.315257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.024 [2024-12-09 14:58:14.315257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:13.024 [2024-12-09 14:58:14.624119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.024 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:13.283 14:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:13.283 [2024-12-09 14:58:15.045635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:13.283 14:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.541 14:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:13.799 Malloc0 00:05:13.799 14:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:14.058 Delay0 00:05:14.058 14:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.316 14:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:14.316 NULL1 00:05:14.316 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:14.575 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1257129 00:05:14.575 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:14.575 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:14.575 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.833 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.092 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:15.092 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:15.092 true 00:05:15.350 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:15.350 14:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.350 14:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.609 14:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:15.609 14:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:15.867 true 00:05:15.867 14:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:15.867 14:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.244 Read completed with error (sct=0, sc=11) 00:05:17.244 14:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.244 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.244 14:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:17.244 14:58:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:17.244 true 00:05:17.502 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:17.502 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.502 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.761 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:17.761 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:18.019 true 00:05:18.019 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:18.019 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.278 14:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.278 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:18.278 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:18.536 true 00:05:18.536 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:18.536 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.795 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.054 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:19.054 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:19.054 true 00:05:19.312 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:19.312 14:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.248 14:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.506 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:20.506 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:20.506 true 00:05:20.506 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:20.506 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.765 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.023 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:21.023 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:21.282 true 00:05:21.282 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:21.282 14:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.219 14:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.478 14:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:22.478 14:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:22.736 true 00:05:22.736 14:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:22.736 14:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.671 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.671 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 
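From roughly 00:05:14 onward the trace settles into the ns_hotplug_stress body: while spdk_nvme_perf (PID 1257129) keeps issuing random reads against cnode1, the script repeatedly removes namespace 1, re-attaches Delay0, grows NULL1 by one size step (1001, 1002, ...) and re-checks that the perf process is still alive with kill -0. The sketch below is a hedged reconstruction of that loop: the RPC calls, their ordering and the kill -0 guard are taken from the trace, but the loop form and variable names are assumptions rather than a copy of target/ns_hotplug_stress.sh.

# Reconstructed hotplug loop (assumption: PERF_PID holds the spdk_nvme_perf pid).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                               # run until perf (-t 30) exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # yank namespace 1 under I/O
    $rpc_py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0  # plug the delay bdev back in
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"                         # grow the null bdev backing the other namespace
done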
00:05:23.671 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:23.930 true 00:05:23.930 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:23.930 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.930 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.188 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:24.188 14:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:24.446 true 00:05:24.447 14:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:24.447 14:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.390 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.652 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:25.652 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:25.910 true 00:05:25.910 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:25.910 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.169 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.169 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:26.169 14:58:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:26.428 true 00:05:26.428 14:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:26.428 14:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.363 14:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.622 14:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:27.622 14:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:27.880 true 00:05:27.880 14:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:27.880 14:58:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.816 14:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.816 14:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:28.816 14:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:29.074 true 00:05:29.074 14:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:29.074 14:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.333 14:58:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.593 14:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:29.593 14:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:29.593 true 00:05:29.851 14:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:29.852 14:58:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.787 14:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.046 14:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:31.046 14:58:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:31.046 true 00:05:31.046 14:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:31.046 14:58:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.304 14:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.563 14:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:31.563 14:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:31.821 true 00:05:31.821 14:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:31.821 14:58:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:32.757 14:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.016 14:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:33.016 14:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:33.016 true 00:05:33.016 14:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:33.016 14:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.275 14:58:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.532 14:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:33.532 14:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:33.790 true 00:05:33.790 14:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:33.790 14:58:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.985 14:58:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.242 14:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:35.242 14:58:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:35.242 true 00:05:35.242 14:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:35.242 14:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.178 14:58:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.436 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:36.436 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:36.436 true 00:05:36.436 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:36.436 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.694 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.953 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:36.953 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:37.211 true 00:05:37.211 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:37.211 14:58:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.212 14:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.470 14:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:37.470 14:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:37.729 true 00:05:37.729 14:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:37.729 14:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.664 14:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.664 14:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:38.664 14:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:38.922 true 00:05:38.922 14:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:38.922 14:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.180 14:58:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.438 14:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:39.438 14:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:39.438 true 00:05:39.696 14:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:39.696 14:58:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.632 14:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.890 14:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:40.890 14:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:41.148 true 00:05:41.148 14:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:41.148 14:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.975 14:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.975 14:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:41.975 14:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:42.234 true 00:05:42.234 14:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:42.234 14:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.492 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.751 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:42.751 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:42.751 true 00:05:42.751 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:42.751 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.009 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.009 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.269 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:43.269 14:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:43.527 true 00:05:43.527 14:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:43.527 14:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.463 14:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.463 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:44.463 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:44.722 true 00:05:44.722 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129 00:05:44.722 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.981 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.981 Initializing NVMe Controllers 00:05:44.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:44.981 Controller IO queue size 128, less than required. 00:05:44.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.981 Controller IO queue size 128, less than required. 00:05:44.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:44.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:44.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:44.981 Initialization complete. Launching workers. 
00:05:44.981 ========================================================
00:05:44.981 Latency(us)
00:05:44.981 Device Information : IOPS MiB/s Average min max
00:05:44.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1724.36 0.84 43155.28 2174.35 1013364.21
00:05:44.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15176.03 7.41 8414.37 1573.41 442618.11
00:05:44.981 ========================================================
00:05:44.981 Total : 16900.39 8.25 11959.02 1573.41 1013364.21
00:05:44.981
00:05:44.981 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:05:45.240 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:05:45.240 true
00:05:45.240 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1257129
00:05:45.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1257129) - No such process
00:05:45.240 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1257129
00:05:45.240 14:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.498 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:45.756 null0
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:45.756 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:46.015 null1
00:05:46.015 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.015 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.015 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:46.273 null2
00:05:46.273 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.273
14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.273 14:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:46.531 null3 00:05:46.531 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.531 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.531 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:46.531 null4 00:05:46.531 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.531 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.531 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:46.790 null5 00:05:46.790 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.790 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.790 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:47.048 null6 00:05:47.048 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.048 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.048 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:47.306 null7 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
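The bdev_null_create calls traced just above create the eight null bdevs (null0 through null7) that the hot-plug workers then attach and detach. A minimal sketch of an equivalent loop, assuming the rpc.py path shown in the trace and reading the two numeric arguments as the bdev size in MiB and the block size in bytes (both values are taken from the log, not re-verified here):

  # create one null bdev per worker, mirroring the @60 trace lines above
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # 100 and 4096 exactly as shown in the log
  done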
00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
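The @14, @16, @17 and @18 trace lines around this point are the body of the add_remove helper: each worker runs ten add/remove cycles of one namespace backed by one null bdev. A hedged reconstruction of that helper, with the argument order and the 10-iteration bound read off the trace (an illustration, not the verbatim script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          # attach the given null bdev as namespace $nsid, then detach it again
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }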
00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
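Continuing the sketch above (and reusing the add_remove function defined there): the @62, @63 and @64 lines show the eight workers being started in the background with their PIDs collected via pids+=($!), and the @66 "wait 1262576 1262578 ..." line that follows reaps them all. The mapping from loop index to namespace ID below is inferred from the add_remove 1 null0 through add_remove 8 null7 pairs in the trace:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &   # one hot-plug worker per null bdev, run in the background
      pids+=($!)                           # as in the @64 pids+=($!) trace lines
  done
  wait "${pids[@]}"                        # corresponds to the @66 wait on all eight worker PIDs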
00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:47.306 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1262576 1262578 1262581 1262585 1262588 1262592 1262595 1262597 00:05:47.307 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.307 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:47.307 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:47.307 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.307 14:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.307 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.307 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.307 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.565 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.566 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.824 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.083 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.347 14:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.347 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.347 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.347 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.347 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.348 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.660 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.029 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.030 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.030 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.290 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.290 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.290 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.290 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.549 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.807 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.808 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.808 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.066 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.325 14:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.325 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.584 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.843 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.844 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:51.102 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.103 14:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:51.362 rmmod nvme_tcp 00:05:51.362 rmmod nvme_fabrics 00:05:51.362 rmmod nvme_keyring 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1256682 ']' 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1256682 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1256682 ']' 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1256682 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.362 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1256682 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1256682' 00:05:51.622 killing process with pid 
1256682 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1256682 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1256682 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.622 14:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:54.162 00:05:54.162 real 0m47.517s 00:05:54.162 user 3m14.855s 00:05:54.162 sys 0m15.394s 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.162 ************************************ 00:05:54.162 END TEST nvmf_ns_hotplug_stress 00:05:54.162 ************************************ 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.162 ************************************ 00:05:54.162 START TEST nvmf_delete_subsystem 00:05:54.162 ************************************ 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:54.162 * Looking for test storage... 
00:05:54.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.162 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:54.163 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:54.164 14:58:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:00.735 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.735 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.736 
14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:00.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:00.736 Found net devices under 0000:af:00.0: cvl_0_0 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:00.736 Found net devices under 0000:af:00.1: cvl_0_1 
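The "Found 0000:af:00.x" and "Found net devices under ..." entries above come from the NIC discovery in nvmf/common.sh: the detected PCI IDs are matched against the e810/x722/Mellanox tables and each matching PCI address is then resolved to its kernel net device through sysfs. A rough standalone equivalent of that resolution step, with the device list hard-coded from the values reported in this run, would be:

pci_devs=(0000:af:00.0 0000:af:00.1)                      # e810 ports reported in the trace (0x8086 - 0x159b)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # same sysfs glob used at common.sh@411
    pci_net_devs=("${pci_net_devs[@]##*/}")               # strip the path, keep the interface name (common.sh@427)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done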
00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:00.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:00.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:06:00.736 00:06:00.736 --- 10.0.0.2 ping statistics --- 00:06:00.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.736 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:00.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:06:00.736 00:06:00.736 --- 10.0.0.1 ping statistics --- 00:06:00.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.736 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1267053 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1267053 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1267053 ']' 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.736 14:59:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.736 [2024-12-09 14:59:01.756478] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:00.736 [2024-12-09 14:59:01.756527] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.736 [2024-12-09 14:59:01.835738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.736 [2024-12-09 14:59:01.875508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.736 [2024-12-09 14:59:01.875539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.736 [2024-12-09 14:59:01.875546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.736 [2024-12-09 14:59:01.875552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.736 [2024-12-09 14:59:01.875558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:00.736 [2024-12-09 14:59:01.876722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.736 [2024-12-09 14:59:01.876722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:00.736 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.737 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.737 14:59:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 [2024-12-09 14:59:02.020531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:00.737 14:59:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 [2024-12-09 14:59:02.040731] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 NULL1 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 Delay0 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1267082 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:00.737 14:59:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:00.737 [2024-12-09 14:59:02.162533] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
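(Editor's note: for readers following the trace above, the delete_subsystem flow up to this point can be reconstructed as the minimal shell sketch below. It is a sketch under assumptions, not the test script itself: it assumes the standard SPDK scripts/rpc.py client in place of the rpc_cmd wrapper used by the suite, and that the nvmf_tgt started earlier inside the cvl_0_0_ns_spdk namespace is listening on the default /var/tmp/spdk.sock. RPC names, NQNs, addresses, and perf flags are taken verbatim from the log.)

  # Sketch of the setup traced above (paths as they appear in this workspace).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport (delete_subsystem.sh@15)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                          # null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # slow namespace keeps I/O in flight

  # Drive I/O from the initiator side, then delete the subsystem underneath it.
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1         # delete_subsystem.sh@32

The delay bdev is what makes the deletion interesting: with large artificial read/write latencies and a queue depth of 128, nvmf_delete_subsystem lands while I/O is still outstanding, and those aborted requests are what SPDK reports as the "completed with error (sct=0, sc=8)" completions that follow in the log. After the delete, the test simply polls "kill -0 $perf_pid" with short sleeps (the repeated sleep 0.5 lines further down) until spdk_nvme_perf has exited.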
00:06:02.639 14:59:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:02.639 14:59:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.639 14:59:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 starting I/O failed: -6 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 [2024-12-09 14:59:04.279859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445780 is same with the 
state(6) to be set 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Read completed with error (sct=0, sc=8) 00:06:02.639 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with 
error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 starting I/O failed: -6 00:06:02.640 [2024-12-09 14:59:04.282369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e20000c40 is same with the state(6) to be set 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 
00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Write completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:02.640 Read completed with error (sct=0, sc=8) 00:06:03.575 [2024-12-09 14:59:05.256900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14469b0 is same with the state(6) to be set 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 [2024-12-09 14:59:05.283358] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445960 is same with the state(6) to be set 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 [2024-12-09 14:59:05.283908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14452c0 is same with the state(6) to be set 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 [2024-12-09 14:59:05.285168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e2000d020 is same with the state(6) to be set 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 
00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Read completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 Write completed with error (sct=0, sc=8) 00:06:03.575 [2024-12-09 14:59:05.285715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3e2000d7c0 is same with the state(6) to be set 00:06:03.575 Initializing NVMe Controllers 00:06:03.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:03.575 Controller IO queue size 128, less than required. 00:06:03.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:03.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:03.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:03.575 Initialization complete. Launching workers. 00:06:03.575 ======================================================== 00:06:03.575 Latency(us) 00:06:03.575 Device Information : IOPS MiB/s Average min max 00:06:03.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.06 0.09 879996.08 344.94 1008004.89 00:06:03.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.13 0.08 907358.46 259.22 1010208.44 00:06:03.575 ======================================================== 00:06:03.575 Total : 341.19 0.17 893158.74 259.22 1010208.44 00:06:03.575 00:06:03.575 [2024-12-09 14:59:05.286284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14469b0 (9): Bad file descriptor 00:06:03.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:03.575 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.575 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:03.575 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1267082 00:06:03.575 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1267082 00:06:04.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1267082) - No such process 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1267082 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1267082 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # local arg=wait 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1267082 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.141 [2024-12-09 14:59:05.815257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1267757 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:04.141 14:59:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.141 [2024-12-09 14:59:05.904013] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:04.706 14:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.706 14:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:04.706 14:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.275 14:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.275 14:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:05.275 14:59:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.842 14:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.842 14:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:05.842 14:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.100 14:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.100 14:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:06.100 14:59:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.667 14:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.667 14:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:06.667 14:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.234 14:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.234 14:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:07.234 14:59:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.234 Initializing NVMe Controllers 00:06:07.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.234 Controller IO queue size 128, less than required. 00:06:07.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:07.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:07.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:07.234 Initialization complete. Launching workers. 
00:06:07.234 ======================================================== 00:06:07.234 Latency(us) 00:06:07.234 Device Information : IOPS MiB/s Average min max 00:06:07.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002643.40 1000134.81 1008815.20 00:06:07.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003332.55 1000113.98 1010220.30 00:06:07.234 ======================================================== 00:06:07.234 Total : 256.00 0.12 1002987.97 1000113.98 1010220.30 00:06:07.234 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1267757 00:06:07.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1267757) - No such process 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1267757 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.802 rmmod nvme_tcp 00:06:07.802 rmmod nvme_fabrics 00:06:07.802 rmmod nvme_keyring 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1267053 ']' 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1267053 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1267053 ']' 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1267053 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1267053 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1267053' 00:06:07.802 killing process with pid 1267053 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1267053 00:06:07.802 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1267053 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.062 14:59:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.967 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:09.967 00:06:09.967 real 0m16.229s 00:06:09.967 user 0m29.210s 00:06:09.967 sys 0m5.471s 00:06:09.967 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.967 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.967 ************************************ 00:06:09.967 END TEST nvmf_delete_subsystem 00:06:09.967 ************************************ 00:06:09.967 14:59:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.226 ************************************ 00:06:10.226 START TEST nvmf_host_management 00:06:10.226 ************************************ 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:10.226 * Looking for test storage... 
00:06:10.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.226 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.227 --rc genhtml_branch_coverage=1 00:06:10.227 --rc genhtml_function_coverage=1 00:06:10.227 --rc genhtml_legend=1 00:06:10.227 --rc geninfo_all_blocks=1 00:06:10.227 --rc geninfo_unexecuted_blocks=1 00:06:10.227 00:06:10.227 ' 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.227 --rc genhtml_branch_coverage=1 00:06:10.227 --rc genhtml_function_coverage=1 00:06:10.227 --rc genhtml_legend=1 00:06:10.227 --rc geninfo_all_blocks=1 00:06:10.227 --rc geninfo_unexecuted_blocks=1 00:06:10.227 00:06:10.227 ' 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.227 --rc genhtml_branch_coverage=1 00:06:10.227 --rc genhtml_function_coverage=1 00:06:10.227 --rc genhtml_legend=1 00:06:10.227 --rc geninfo_all_blocks=1 00:06:10.227 --rc geninfo_unexecuted_blocks=1 00:06:10.227 00:06:10.227 ' 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.227 --rc genhtml_branch_coverage=1 00:06:10.227 --rc genhtml_function_coverage=1 00:06:10.227 --rc genhtml_legend=1 00:06:10.227 --rc geninfo_all_blocks=1 00:06:10.227 --rc geninfo_unexecuted_blocks=1 00:06:10.227 00:06:10.227 ' 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.227 14:59:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:10.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.227 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.487 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.487 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.487 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.487 14:59:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:17.058 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:17.058 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:17.058 Found net devices under 0000:af:00.0: cvl_0_0 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:17.058 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.059 14:59:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:17.059 Found net devices under 0000:af:00.1: cvl_0_1 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:17.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:06:17.059 00:06:17.059 --- 10.0.0.2 ping statistics --- 00:06:17.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.059 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:17.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:06:17.059 00:06:17.059 --- 10.0.0.1 ping statistics --- 00:06:17.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.059 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.059 14:59:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1271888 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1271888 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:17.059 14:59:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1271888 ']' 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 [2024-12-09 14:59:18.062040] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:06:17.059 [2024-12-09 14:59:18.062089] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.059 [2024-12-09 14:59:18.146759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.059 [2024-12-09 14:59:18.188397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.059 [2024-12-09 14:59:18.188432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.059 [2024-12-09 14:59:18.188440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.059 [2024-12-09 14:59:18.188446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.059 [2024-12-09 14:59:18.188451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
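Context for the nvmf/common.sh trace above: nvmftestinit detected the two Intel E810 ports (0x8086:0x159b, exposed as cvl_0_0 and cvl_0_1), moved one into a private network namespace to act as the NVMe/TCP target side, kept the other in the default namespace as the initiator, opened TCP port 4420, verified reachability with ping in both directions, and then launched nvmf_tgt inside that namespace. The following is a condensed sketch reconstructed from the xtrace output, not a verbatim excerpt: interface names, addresses and the 0x1E core mask are simply this run's values, and the nvmf_tgt path is shortened.

    # Target port goes into its own namespace; initiator port stays in the default one.
    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1
    NETNS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NETNS"
    ip link set "$TARGET_IF" netns "$NETNS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                      # initiator/host side
    ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target side
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NETNS" ip link set "$TARGET_IF" up
    ip netns exec "$NETNS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity checks before any NVMe/TCP traffic, as in the log above.
    ping -c 1 10.0.0.2
    ip netns exec "$NETNS" ping -c 1 10.0.0.1

    # The target then runs inside the namespace on cores 1-4 (mask 0x1E).
    ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &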
00:06:17.059 [2024-12-09 14:59:18.189981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.059 [2024-12-09 14:59:18.190086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.059 [2024-12-09 14:59:18.190102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.059 [2024-12-09 14:59:18.190105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 [2024-12-09 14:59:18.335306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 Malloc0 00:06:17.059 [2024-12-09 14:59:18.416815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.059 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1272000 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1272000 /var/tmp/bdevperf.sock 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1272000 ']' 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:17.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:17.060 { 00:06:17.060 "params": { 00:06:17.060 "name": "Nvme$subsystem", 00:06:17.060 "trtype": "$TEST_TRANSPORT", 00:06:17.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:17.060 "adrfam": "ipv4", 00:06:17.060 "trsvcid": "$NVMF_PORT", 00:06:17.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:17.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:17.060 "hdgst": ${hdgst:-false}, 00:06:17.060 "ddgst": ${ddgst:-false} 00:06:17.060 }, 00:06:17.060 "method": "bdev_nvme_attach_controller" 00:06:17.060 } 00:06:17.060 EOF 00:06:17.060 )") 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:17.060 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:17.060 "params": { 00:06:17.060 "name": "Nvme0", 00:06:17.060 "trtype": "tcp", 00:06:17.060 "traddr": "10.0.0.2", 00:06:17.060 "adrfam": "ipv4", 00:06:17.060 "trsvcid": "4420", 00:06:17.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:17.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:17.060 "hdgst": false, 00:06:17.060 "ddgst": false 00:06:17.060 }, 00:06:17.060 "method": "bdev_nvme_attach_controller" 00:06:17.060 }' 00:06:17.060 [2024-12-09 14:59:18.511068] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:06:17.060 [2024-12-09 14:59:18.511113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272000 ] 00:06:17.060 [2024-12-09 14:59:18.585594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.060 [2024-12-09 14:59:18.625322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.060 Running I/O for 10 seconds... 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=82 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 82 -ge 100 ']' 00:06:17.319 14:59:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:17.580 
14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.580 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.580 [2024-12-09 14:59:19.227505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with 
the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.580 [2024-12-09 14:59:19.227720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227878] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.227924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0b710 is same with the state(6) to be set 00:06:17.581 [2024-12-09 14:59:19.228002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 
[2024-12-09 14:59:19.228129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 
14:59:19.228287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.581 [2024-12-09 14:59:19.228389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.581 [2024-12-09 14:59:19.228396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 
14:59:19.228434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 
14:59:19.228581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 
14:59:19.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 
14:59:19.228870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.582 [2024-12-09 14:59:19.228977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.582 [2024-12-09 14:59:19.228983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.583 [2024-12-09 14:59:19.228991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110550 is same with the state(6) to be set 00:06:17.583 [2024-12-09 14:59:19.229948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:17.583 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:17.583 00:06:17.583 Latency(us) 00:06:17.583 [2024-12-09T13:59:19.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.583 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:17.583 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:17.583 Verification LBA range: start 0x0 length 0x400 00:06:17.583 
Nvme0n1 : 0.40 1897.90 118.62 158.16 0.00 30304.93 3698.10 26963.38 00:06:17.583 [2024-12-09T13:59:19.378Z] =================================================================================================================== 00:06:17.583 [2024-12-09T13:59:19.378Z] Total : 1897.90 118.62 158.16 0.00 30304.93 3698.10 26963.38 00:06:17.583 [2024-12-09 14:59:19.232389] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.583 [2024-12-09 14:59:19.232412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fcaa0 (9): Bad file descriptor 00:06:17.583 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.583 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:17.583 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.583 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.583 [2024-12-09 14:59:19.239344] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:17.583 [2024-12-09 14:59:19.239547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:17.583 [2024-12-09 14:59:19.239571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.583 [2024-12-09 14:59:19.239584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:17.583 [2024-12-09 14:59:19.239591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:17.583 [2024-12-09 14:59:19.239598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:17.583 [2024-12-09 14:59:19.239604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x10fcaa0 00:06:17.583 [2024-12-09 14:59:19.239623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fcaa0 (9): Bad file descriptor 00:06:17.583 [2024-12-09 14:59:19.239634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:17.583 [2024-12-09 14:59:19.239641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:17.583 [2024-12-09 14:59:19.239649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:17.583 [2024-12-09 14:59:19.239657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
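The failures above are the subsystem host allow-list in action: once the permitted host NQN is removed from nqn.2016-06.io.spdk:cnode0, in-flight I/O is aborted (SQ DELETION) and the reconnect is rejected with "does not allow host" (sct 1, sc 132) until host_management.sh adds the host back via nvmf_subsystem_add_host. A minimal sketch of that round trip with rpc.py, assuming the target is already serving the subsystem shown in this run (rpc.py path shortened):

rpc=./scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0

$rpc nvmf_subsystem_remove_host "$subsys" "$host"   # fabric CONNECT from $host now fails
$rpc nvmf_get_subsystems                            # inspect hosts / allow_any_host state
$rpc nvmf_subsystem_add_host "$subsys" "$host"      # re-admit the host; new connects succeed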
00:06:17.583 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.583 14:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1272000 00:06:18.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1272000) - No such process 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:18.518 { 00:06:18.518 "params": { 00:06:18.518 "name": "Nvme$subsystem", 00:06:18.518 "trtype": "$TEST_TRANSPORT", 00:06:18.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:18.518 "adrfam": "ipv4", 00:06:18.518 "trsvcid": "$NVMF_PORT", 00:06:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:18.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:18.518 "hdgst": ${hdgst:-false}, 00:06:18.518 "ddgst": ${ddgst:-false} 00:06:18.518 }, 00:06:18.518 "method": "bdev_nvme_attach_controller" 00:06:18.518 } 00:06:18.518 EOF 00:06:18.518 )") 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:18.518 14:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:18.518 "params": { 00:06:18.518 "name": "Nvme0", 00:06:18.518 "trtype": "tcp", 00:06:18.518 "traddr": "10.0.0.2", 00:06:18.518 "adrfam": "ipv4", 00:06:18.518 "trsvcid": "4420", 00:06:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:18.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:18.518 "hdgst": false, 00:06:18.518 "ddgst": false 00:06:18.518 }, 00:06:18.518 "method": "bdev_nvme_attach_controller" 00:06:18.518 }' 00:06:18.518 [2024-12-09 14:59:20.300422] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
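gen_nvmf_target_json above pipes its generated configuration into bdevperf over /dev/fd/62. Written out as a standalone file, the equivalent would look roughly like the sketch below: the params block is copied from the printf output in this log, while the surrounding "subsystems"/"bdev" wrapper follows the usual SPDK JSON-config layout and is assumed here rather than taken from the helper's output (bdevperf path shortened).

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload flags as the run above: 64 outstanding 64 KiB verify I/Os for 1 second
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1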
00:06:18.518 [2024-12-09 14:59:20.300468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272250 ] 00:06:18.777 [2024-12-09 14:59:20.374591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.777 [2024-12-09 14:59:20.415158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.036 Running I/O for 1 seconds... 00:06:19.973 1984.00 IOPS, 124.00 MiB/s 00:06:19.973 Latency(us) 00:06:19.973 [2024-12-09T13:59:21.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.973 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:19.973 Verification LBA range: start 0x0 length 0x400 00:06:19.973 Nvme0n1 : 1.01 2033.64 127.10 0.00 0.00 30975.98 7458.62 26838.55 00:06:19.973 [2024-12-09T13:59:21.768Z] =================================================================================================================== 00:06:19.973 [2024-12-09T13:59:21.768Z] Total : 2033.64 127.10 0.00 0.00 30975.98 7458.62 26838.55 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:20.232 rmmod nvme_tcp 00:06:20.232 rmmod nvme_fabrics 00:06:20.232 rmmod nvme_keyring 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1271888 ']' 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1271888 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1271888 ']' 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1271888 00:06:20.232 14:59:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1271888 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1271888' 00:06:20.232 killing process with pid 1271888 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1271888 00:06:20.232 14:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1271888 00:06:20.492 [2024-12-09 14:59:22.141931] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.492 14:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.029 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:23.030 00:06:23.030 real 0m12.440s 00:06:23.030 user 0m19.907s 00:06:23.030 sys 0m5.510s 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.030 ************************************ 00:06:23.030 END TEST nvmf_host_management 00:06:23.030 ************************************ 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
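Between the two tests, nvmftestfini performs the teardown visible above: stop the target process, unload the host-side NVMe modules (the rmmod lines), restore only the firewall rules not tagged SPDK_NVMF, and clear the test interfaces. A condensed sketch of that order, reusing this run's namespace and interface names, with the PID variable as a placeholder:

kill "$nvmfpid" 2>/dev/null || true                    # stop nvmf_tgt (placeholder PID variable)
modprobe -r nvme-tcp nvme-fabrics nvme-keyring         # source of the rmmod output above
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove the target-side namespace
ip -4 addr flush cvl_0_1                               # clear the initiator-side address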
00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.030 ************************************ 00:06:23.030 START TEST nvmf_lvol 00:06:23.030 ************************************ 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.030 * Looking for test storage... 00:06:23.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.030 --rc genhtml_branch_coverage=1 00:06:23.030 --rc genhtml_function_coverage=1 00:06:23.030 --rc genhtml_legend=1 00:06:23.030 --rc geninfo_all_blocks=1 00:06:23.030 --rc geninfo_unexecuted_blocks=1 00:06:23.030 00:06:23.030 ' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.030 --rc genhtml_branch_coverage=1 00:06:23.030 --rc genhtml_function_coverage=1 00:06:23.030 --rc genhtml_legend=1 00:06:23.030 --rc geninfo_all_blocks=1 00:06:23.030 --rc geninfo_unexecuted_blocks=1 00:06:23.030 00:06:23.030 ' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.030 --rc genhtml_branch_coverage=1 00:06:23.030 --rc genhtml_function_coverage=1 00:06:23.030 --rc genhtml_legend=1 00:06:23.030 --rc geninfo_all_blocks=1 00:06:23.030 --rc geninfo_unexecuted_blocks=1 00:06:23.030 00:06:23.030 ' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.030 --rc genhtml_branch_coverage=1 00:06:23.030 --rc genhtml_function_coverage=1 00:06:23.030 --rc genhtml_legend=1 00:06:23.030 --rc geninfo_all_blocks=1 00:06:23.030 --rc geninfo_unexecuted_blocks=1 00:06:23.030 00:06:23.030 ' 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
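The cmp_versions trace a few entries above (lt 1.15 2) is what selects the newer or older lcov flag spellings: both version strings are split on dots, dashes and colons and compared field by field as integers, with missing fields treated as 0. A hedged re-statement of that check; version_lt is an illustrative name, not the function used by scripts/common.sh, and fields are assumed to be plain integers:

version_lt() {                          # true (0) when $1 sorts before $2
    local IFS='.-:' i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                            # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: use the old option names"   # the branch taken above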
00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.030 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.031 14:59:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:29.606 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:29.606 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.606 14:59:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:29.606 Found net devices under 0000:af:00.0: cvl_0_0 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:29.606 Found net devices under 0000:af:00.1: cvl_0_1 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:06:29.606 00:06:29.606 --- 10.0.0.2 ping statistics --- 00:06:29.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.606 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:06:29.606 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:06:29.606 00:06:29.606 --- 10.0.0.1 ping statistics --- 00:06:29.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.606 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1276127 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1276127 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1276127 ']' 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.607 [2024-12-09 14:59:30.563525] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
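The interface and namespace plumbing traced above gives the test a real two-port path: one e810 port (cvl_0_0) is moved into its own network namespace to act as the target at 10.0.0.2, while the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. Condensed into one place, with the nvmf_tgt path shortened and the iptables comment abbreviated (the run above embeds the full rule text in the comment), the setup amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # three reactors, cores 0-2

The SPDK_NVMF tag on the iptables rule is what lets nvmftestfini strip exactly this rule later with iptables-save | grep -v SPDK_NVMF | iptables-restore.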
00:06:29.607 [2024-12-09 14:59:30.563574] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.607 [2024-12-09 14:59:30.643967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.607 [2024-12-09 14:59:30.684284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.607 [2024-12-09 14:59:30.684320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.607 [2024-12-09 14:59:30.684327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.607 [2024-12-09 14:59:30.684333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.607 [2024-12-09 14:59:30.684338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:29.607 [2024-12-09 14:59:30.685642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.607 [2024-12-09 14:59:30.685751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.607 [2024-12-09 14:59:30.685753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.607 14:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:29.607 [2024-12-09 14:59:30.995319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.607 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.607 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:29.607 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:29.866 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:29.866 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:30.125 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:30.125 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=043f20e5-177b-4571-b486-f60de49ee4f2 00:06:30.125 14:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 043f20e5-177b-4571-b486-f60de49ee4f2 lvol 20 00:06:30.384 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f8d9b9e3-cb07-4444-95ed-13f1d0f95c88 00:06:30.384 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.643 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8d9b9e3-cb07-4444-95ed-13f1d0f95c88 00:06:30.902 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:30.902 [2024-12-09 14:59:32.674176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.161 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.161 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1276479 00:06:31.161 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:31.161 14:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:32.539 14:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f8d9b9e3-cb07-4444-95ed-13f1d0f95c88 MY_SNAPSHOT 00:06:32.539 14:59:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0c79169e-0de4-40fa-9091-9a7aa90a8888 00:06:32.539 14:59:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f8d9b9e3-cb07-4444-95ed-13f1d0f95c88 30 00:06:32.798 14:59:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0c79169e-0de4-40fa-9091-9a7aa90a8888 MY_CLONE 00:06:33.131 14:59:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d4769e29-37c9-4ae8-a227-8f9f1d0759fc 00:06:33.131 14:59:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d4769e29-37c9-4ae8-a227-8f9f1d0759fc 00:06:33.389 14:59:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1276479 00:06:43.368 Initializing NVMe Controllers 00:06:43.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:43.368 Controller IO queue size 128, less than required. 00:06:43.368 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
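While spdk_nvme_perf spins up above, the stack it exercises was assembled by the RPC calls traced through nvmf_lvol.sh: two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol exported as a namespace over NVMe/TCP, then snapshot, resize to 30 MiB, clone and inflate while I/O is in flight. Collapsed into one sequence, with the rpc.py path shortened and the returned UUIDs captured into shell variables as the test script does:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # Malloc0
$rpc bdev_malloc_create 64 512                      # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# with spdk_nvme_perf running against the namespace:
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                    # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot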
00:06:43.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:43.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:43.368 Initialization complete. Launching workers. 00:06:43.368 ======================================================== 00:06:43.368 Latency(us) 00:06:43.368 Device Information : IOPS MiB/s Average min max 00:06:43.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12422.54 48.53 10308.24 1520.23 118542.01 00:06:43.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12308.05 48.08 10398.97 3214.16 57534.65 00:06:43.368 ======================================================== 00:06:43.368 Total : 24730.60 96.60 10353.40 1520.23 118542.01 00:06:43.368 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8d9b9e3-cb07-4444-95ed-13f1d0f95c88 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 043f20e5-177b-4571-b486-f60de49ee4f2 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:43.368 rmmod nvme_tcp 00:06:43.368 rmmod nvme_fabrics 00:06:43.368 rmmod nvme_keyring 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1276127 ']' 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1276127 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1276127 ']' 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1276127 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.368 14:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1276127 00:06:43.368 14:59:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1276127' 00:06:43.368 killing process with pid 1276127 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1276127 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1276127 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.368 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.369 14:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.744 00:06:44.744 real 0m21.997s 00:06:44.744 user 1m3.334s 00:06:44.744 sys 0m7.641s 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.744 ************************************ 00:06:44.744 END TEST nvmf_lvol 00:06:44.744 ************************************ 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.744 ************************************ 00:06:44.744 START TEST nvmf_lvs_grow 00:06:44.744 ************************************ 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:44.744 * Looking for test storage... 
00:06:44.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.744 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.003 --rc genhtml_branch_coverage=1 00:06:45.003 --rc genhtml_function_coverage=1 00:06:45.003 --rc genhtml_legend=1 00:06:45.003 --rc geninfo_all_blocks=1 00:06:45.003 --rc geninfo_unexecuted_blocks=1 00:06:45.003 00:06:45.003 ' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.003 --rc genhtml_branch_coverage=1 00:06:45.003 --rc genhtml_function_coverage=1 00:06:45.003 --rc genhtml_legend=1 00:06:45.003 --rc geninfo_all_blocks=1 00:06:45.003 --rc geninfo_unexecuted_blocks=1 00:06:45.003 00:06:45.003 ' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.003 --rc genhtml_branch_coverage=1 00:06:45.003 --rc genhtml_function_coverage=1 00:06:45.003 --rc genhtml_legend=1 00:06:45.003 --rc geninfo_all_blocks=1 00:06:45.003 --rc geninfo_unexecuted_blocks=1 00:06:45.003 00:06:45.003 ' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.003 --rc genhtml_branch_coverage=1 00:06:45.003 --rc genhtml_function_coverage=1 00:06:45.003 --rc genhtml_legend=1 00:06:45.003 --rc geninfo_all_blocks=1 00:06:45.003 --rc geninfo_unexecuted_blocks=1 00:06:45.003 00:06:45.003 ' 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:45.003 14:59:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.003 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.004 14:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:51.576 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:51.576 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.576 14:59:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.576 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:51.576 Found net devices under 0000:af:00.0: cvl_0_0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:51.577 Found net devices under 0000:af:00.1: cvl_0_1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:06:51.577 00:06:51.577 --- 10.0.0.2 ping statistics --- 00:06:51.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.577 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:06:51.577 00:06:51.577 --- 10.0.0.1 ping statistics --- 00:06:51.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.577 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1281949 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1281949 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1281949 ']' 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.577 [2024-12-09 14:59:52.680381] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
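Everything from gather_supported_nvmf_pci_devs down to the nvmf_tgt launch above is the TCP test-bed bring-up: the two e810 ports are discovered as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and given the target address 10.0.0.2, cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with an SPDK-tagged iptables rule, reachability is proved with one ping in each direction, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace on core 0. A condensed sketch of those steps (device names, addresses and core mask are just what this rig and run use; the real iptables rule embeds the full rule text after the SPDK_NVMF prefix):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, test netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                                  # tagged so teardown can grep it out
  ping -c 1 10.0.0.2                                                  # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &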
00:06:51.577 [2024-12-09 14:59:52.680423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.577 [2024-12-09 14:59:52.755968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.577 [2024-12-09 14:59:52.795212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.577 [2024-12-09 14:59:52.795254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.577 [2024-12-09 14:59:52.795261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.577 [2024-12-09 14:59:52.795269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.577 [2024-12-09 14:59:52.795275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.577 [2024-12-09 14:59:52.795803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.577 14:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.577 [2024-12-09 14:59:53.103317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:51.577 ************************************ 00:06:51.577 START TEST lvs_grow_clean 00:06:51.577 ************************************ 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:51.577 14:59:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.577 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.578 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:51.836 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:51.836 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:51.836 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7dafeb9d-033e-4a58-a024-b1f461436b31 00:06:51.836 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:06:51.836 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:52.095 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:52.095 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:52.095 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7dafeb9d-033e-4a58-a024-b1f461436b31 lvol 150 00:06:52.385 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f00169a9-6212-4aca-a6ad-c7153dbf5370 00:06:52.385 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:52.385 14:59:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:52.385 [2024-12-09 14:59:54.120071] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:52.385 [2024-12-09 14:59:54.120120] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:52.385 true 00:06:52.385 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7dafeb9d-033e-4a58-a024-b1f461436b31 00:06:52.385 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:52.666 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:52.666 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.931 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f00169a9-6212-4aca-a6ad-c7153dbf5370 00:06:52.931 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.200 [2024-12-09 14:59:54.866321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.200 14:59:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1282298 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1282298 /var/tmp/bdevperf.sock 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1282298 ']' 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:53.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.513 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:53.513 [2024-12-09 14:59:55.110487] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
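From nvmf_create_transport down to the bdevperf launch above, lvs_grow_clean builds the storage it is about to grow: a 200M file is exposed as an AIO bdev, an lvol store with 4 MiB clusters is created on it (49 data clusters both before and after the rescan, since at that point only the backing bdev has grown), a 150M lvol is carved out, the backing file is truncated to 400M and the AIO bdev rescanned, and the lvol is exported over NVMe/TCP on 10.0.0.2:4420 so bdevperf can drive random writes against it. The RPC sequence, condensed into a sketch ($rpc_py and $aio stand for the full paths in the trace, and the captured UUIDs are the ones from this run):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"                                          # backing file for the store
  $rpc_py bdev_aio_create "$aio" aio_bdev 4096                     # expose it as a 4K-block AIO bdev
  lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)             # prints the lvstore UUID
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)              # 150M logical volume
  truncate -s 400M "$aio"                                          # grow the file on disk ...
  $rpc_py bdev_aio_rescan aio_bdev                                 # ... and let the AIO bdev pick it up
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The store itself is only grown further down, while bdevperf is running: bdev_lvol_grow_lvstore -u 7dafeb9d-033e-4a58-a024-b1f461436b31 takes total_data_clusters from 49 to 99, and the closing bdev_lvol_get_lvstores check expects 61 free clusters once the 150M lvol's 38 allocated clusters are counted.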
00:06:53.513 [2024-12-09 14:59:55.110536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282298 ] 00:06:53.513 [2024-12-09 14:59:55.177870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.513 [2024-12-09 14:59:55.216971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.771 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.771 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:53.771 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:54.029 Nvme0n1 00:06:54.029 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:54.288 [ 00:06:54.288 { 00:06:54.288 "name": "Nvme0n1", 00:06:54.288 "aliases": [ 00:06:54.288 "f00169a9-6212-4aca-a6ad-c7153dbf5370" 00:06:54.288 ], 00:06:54.288 "product_name": "NVMe disk", 00:06:54.288 "block_size": 4096, 00:06:54.288 "num_blocks": 38912, 00:06:54.288 "uuid": "f00169a9-6212-4aca-a6ad-c7153dbf5370", 00:06:54.288 "numa_id": 1, 00:06:54.288 "assigned_rate_limits": { 00:06:54.288 "rw_ios_per_sec": 0, 00:06:54.288 "rw_mbytes_per_sec": 0, 00:06:54.288 "r_mbytes_per_sec": 0, 00:06:54.288 "w_mbytes_per_sec": 0 00:06:54.288 }, 00:06:54.288 "claimed": false, 00:06:54.288 "zoned": false, 00:06:54.288 "supported_io_types": { 00:06:54.288 "read": true, 00:06:54.288 "write": true, 00:06:54.288 "unmap": true, 00:06:54.288 "flush": true, 00:06:54.288 "reset": true, 00:06:54.288 "nvme_admin": true, 00:06:54.288 "nvme_io": true, 00:06:54.288 "nvme_io_md": false, 00:06:54.288 "write_zeroes": true, 00:06:54.288 "zcopy": false, 00:06:54.288 "get_zone_info": false, 00:06:54.288 "zone_management": false, 00:06:54.288 "zone_append": false, 00:06:54.288 "compare": true, 00:06:54.288 "compare_and_write": true, 00:06:54.288 "abort": true, 00:06:54.288 "seek_hole": false, 00:06:54.288 "seek_data": false, 00:06:54.288 "copy": true, 00:06:54.288 "nvme_iov_md": false 00:06:54.288 }, 00:06:54.288 "memory_domains": [ 00:06:54.288 { 00:06:54.288 "dma_device_id": "system", 00:06:54.288 "dma_device_type": 1 00:06:54.288 } 00:06:54.288 ], 00:06:54.288 "driver_specific": { 00:06:54.288 "nvme": [ 00:06:54.288 { 00:06:54.288 "trid": { 00:06:54.288 "trtype": "TCP", 00:06:54.288 "adrfam": "IPv4", 00:06:54.288 "traddr": "10.0.0.2", 00:06:54.288 "trsvcid": "4420", 00:06:54.288 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:54.288 }, 00:06:54.288 "ctrlr_data": { 00:06:54.288 "cntlid": 1, 00:06:54.288 "vendor_id": "0x8086", 00:06:54.288 "model_number": "SPDK bdev Controller", 00:06:54.288 "serial_number": "SPDK0", 00:06:54.288 "firmware_revision": "25.01", 00:06:54.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:54.288 "oacs": { 00:06:54.288 "security": 0, 00:06:54.288 "format": 0, 00:06:54.288 "firmware": 0, 00:06:54.288 "ns_manage": 0 00:06:54.288 }, 00:06:54.288 "multi_ctrlr": true, 00:06:54.288 
"ana_reporting": false 00:06:54.288 }, 00:06:54.288 "vs": { 00:06:54.288 "nvme_version": "1.3" 00:06:54.288 }, 00:06:54.288 "ns_data": { 00:06:54.288 "id": 1, 00:06:54.288 "can_share": true 00:06:54.288 } 00:06:54.288 } 00:06:54.288 ], 00:06:54.288 "mp_policy": "active_passive" 00:06:54.288 } 00:06:54.288 } 00:06:54.288 ] 00:06:54.288 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1282530 00:06:54.288 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:54.288 14:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:54.288 Running I/O for 10 seconds... 00:06:55.223 Latency(us) 00:06:55.223 [2024-12-09T13:59:57.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.223 Nvme0n1 : 1.00 23754.00 92.79 0.00 0.00 0.00 0.00 0.00 00:06:55.223 [2024-12-09T13:59:57.019Z] =================================================================================================================== 00:06:55.224 [2024-12-09T13:59:57.019Z] Total : 23754.00 92.79 0.00 0.00 0.00 0.00 0.00 00:06:55.224 00:06:56.158 14:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:06:56.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.417 Nvme0n1 : 2.00 23874.50 93.26 0.00 0.00 0.00 0.00 0.00 00:06:56.417 [2024-12-09T13:59:58.212Z] =================================================================================================================== 00:06:56.417 [2024-12-09T13:59:58.212Z] Total : 23874.50 93.26 0.00 0.00 0.00 0.00 0.00 00:06:56.417 00:06:56.417 true 00:06:56.417 14:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:06:56.417 14:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:56.676 14:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:56.676 14:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:56.676 14:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1282530 00:06:57.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.242 Nvme0n1 : 3.00 23886.00 93.30 0.00 0.00 0.00 0.00 0.00 00:06:57.242 [2024-12-09T13:59:59.037Z] =================================================================================================================== 00:06:57.242 [2024-12-09T13:59:59.037Z] Total : 23886.00 93.30 0.00 0.00 0.00 0.00 0.00 00:06:57.242 00:06:58.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.620 Nvme0n1 : 4.00 23956.50 93.58 0.00 0.00 0.00 0.00 0.00 00:06:58.620 [2024-12-09T14:00:00.415Z] 
=================================================================================================================== 00:06:58.620 [2024-12-09T14:00:00.415Z] Total : 23956.50 93.58 0.00 0.00 0.00 0.00 0.00 00:06:58.620 00:06:59.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.556 Nvme0n1 : 5.00 23873.20 93.25 0.00 0.00 0.00 0.00 0.00 00:06:59.556 [2024-12-09T14:00:01.351Z] =================================================================================================================== 00:06:59.556 [2024-12-09T14:00:01.351Z] Total : 23873.20 93.25 0.00 0.00 0.00 0.00 0.00 00:06:59.556 00:07:00.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.492 Nvme0n1 : 6.00 23904.17 93.38 0.00 0.00 0.00 0.00 0.00 00:07:00.492 [2024-12-09T14:00:02.287Z] =================================================================================================================== 00:07:00.492 [2024-12-09T14:00:02.287Z] Total : 23904.17 93.38 0.00 0.00 0.00 0.00 0.00 00:07:00.492 00:07:01.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.428 Nvme0n1 : 7.00 23950.86 93.56 0.00 0.00 0.00 0.00 0.00 00:07:01.428 [2024-12-09T14:00:03.223Z] =================================================================================================================== 00:07:01.428 [2024-12-09T14:00:03.223Z] Total : 23950.86 93.56 0.00 0.00 0.00 0.00 0.00 00:07:01.428 00:07:02.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.363 Nvme0n1 : 8.00 23997.25 93.74 0.00 0.00 0.00 0.00 0.00 00:07:02.363 [2024-12-09T14:00:04.158Z] =================================================================================================================== 00:07:02.363 [2024-12-09T14:00:04.158Z] Total : 23997.25 93.74 0.00 0.00 0.00 0.00 0.00 00:07:02.363 00:07:03.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.299 Nvme0n1 : 9.00 24021.22 93.83 0.00 0.00 0.00 0.00 0.00 00:07:03.299 [2024-12-09T14:00:05.094Z] =================================================================================================================== 00:07:03.299 [2024-12-09T14:00:05.094Z] Total : 24021.22 93.83 0.00 0.00 0.00 0.00 0.00 00:07:03.299 00:07:04.237 00:07:04.237 Latency(us) 00:07:04.237 [2024-12-09T14:00:06.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.237 Nvme0n1 : 10.00 24047.54 93.94 0.00 0.00 5319.96 3120.76 10610.59 00:07:04.237 [2024-12-09T14:00:06.032Z] =================================================================================================================== 00:07:04.237 [2024-12-09T14:00:06.032Z] Total : 24047.54 93.94 0.00 0.00 5319.96 3120.76 10610.59 00:07:04.237 { 00:07:04.237 "results": [ 00:07:04.237 { 00:07:04.237 "job": "Nvme0n1", 00:07:04.237 "core_mask": "0x2", 00:07:04.237 "workload": "randwrite", 00:07:04.237 "status": "finished", 00:07:04.237 "queue_depth": 128, 00:07:04.237 "io_size": 4096, 00:07:04.237 "runtime": 10.001773, 00:07:04.237 "iops": 24047.53637180128, 00:07:04.237 "mibps": 93.93568895234876, 00:07:04.237 "io_failed": 0, 00:07:04.237 "io_timeout": 0, 00:07:04.237 "avg_latency_us": 5319.961105027681, 00:07:04.237 "min_latency_us": 3120.7619047619046, 00:07:04.237 "max_latency_us": 10610.590476190477 00:07:04.237 } 00:07:04.237 ], 00:07:04.237 "core_count": 1 00:07:04.237 } 00:07:04.237 15:00:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1282298 00:07:04.237 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1282298 ']' 00:07:04.237 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1282298 00:07:04.237 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:04.495 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1282298 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1282298' 00:07:04.496 killing process with pid 1282298 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1282298 00:07:04.496 Received shutdown signal, test time was about 10.000000 seconds 00:07:04.496 00:07:04.496 Latency(us) 00:07:04.496 [2024-12-09T14:00:06.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.496 [2024-12-09T14:00:06.291Z] =================================================================================================================== 00:07:04.496 [2024-12-09T14:00:06.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1282298 00:07:04.496 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:04.754 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.012 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:05.012 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:05.271 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:05.271 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:05.271 15:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:05.271 [2024-12-09 15:00:07.058958] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:05.530 request: 00:07:05.530 { 00:07:05.530 "uuid": "7dafeb9d-033e-4a58-a024-b1f461436b31", 00:07:05.530 "method": "bdev_lvol_get_lvstores", 00:07:05.530 "req_id": 1 00:07:05.530 } 00:07:05.530 Got JSON-RPC error response 00:07:05.530 response: 00:07:05.530 { 00:07:05.530 "code": -19, 00:07:05.530 "message": "No such device" 00:07:05.530 } 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.530 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:05.788 aio_bdev 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f00169a9-6212-4aca-a6ad-c7153dbf5370 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@903 -- # local bdev_name=f00169a9-6212-4aca-a6ad-c7153dbf5370 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.788 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:06.046 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f00169a9-6212-4aca-a6ad-c7153dbf5370 -t 2000 00:07:06.304 [ 00:07:06.304 { 00:07:06.304 "name": "f00169a9-6212-4aca-a6ad-c7153dbf5370", 00:07:06.304 "aliases": [ 00:07:06.304 "lvs/lvol" 00:07:06.304 ], 00:07:06.304 "product_name": "Logical Volume", 00:07:06.304 "block_size": 4096, 00:07:06.304 "num_blocks": 38912, 00:07:06.304 "uuid": "f00169a9-6212-4aca-a6ad-c7153dbf5370", 00:07:06.304 "assigned_rate_limits": { 00:07:06.304 "rw_ios_per_sec": 0, 00:07:06.304 "rw_mbytes_per_sec": 0, 00:07:06.304 "r_mbytes_per_sec": 0, 00:07:06.305 "w_mbytes_per_sec": 0 00:07:06.305 }, 00:07:06.305 "claimed": false, 00:07:06.305 "zoned": false, 00:07:06.305 "supported_io_types": { 00:07:06.305 "read": true, 00:07:06.305 "write": true, 00:07:06.305 "unmap": true, 00:07:06.305 "flush": false, 00:07:06.305 "reset": true, 00:07:06.305 "nvme_admin": false, 00:07:06.305 "nvme_io": false, 00:07:06.305 "nvme_io_md": false, 00:07:06.305 "write_zeroes": true, 00:07:06.305 "zcopy": false, 00:07:06.305 "get_zone_info": false, 00:07:06.305 "zone_management": false, 00:07:06.305 "zone_append": false, 00:07:06.305 "compare": false, 00:07:06.305 "compare_and_write": false, 00:07:06.305 "abort": false, 00:07:06.305 "seek_hole": true, 00:07:06.305 "seek_data": true, 00:07:06.305 "copy": false, 00:07:06.305 "nvme_iov_md": false 00:07:06.305 }, 00:07:06.305 "driver_specific": { 00:07:06.305 "lvol": { 00:07:06.305 "lvol_store_uuid": "7dafeb9d-033e-4a58-a024-b1f461436b31", 00:07:06.305 "base_bdev": "aio_bdev", 00:07:06.305 "thin_provision": false, 00:07:06.305 "num_allocated_clusters": 38, 00:07:06.305 "snapshot": false, 00:07:06.305 "clone": false, 00:07:06.305 "esnap_clone": false 00:07:06.305 } 00:07:06.305 } 00:07:06.305 } 00:07:06.305 ] 00:07:06.305 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:06.305 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:06.305 15:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:06.305 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:06.305 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:06.305 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:06.564 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:06.564 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f00169a9-6212-4aca-a6ad-c7153dbf5370 00:07:06.822 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7dafeb9d-033e-4a58-a024-b1f461436b31 00:07:07.079 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.080 00:07:07.080 real 0m15.655s 00:07:07.080 user 0m15.252s 00:07:07.080 sys 0m1.506s 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:07.080 ************************************ 00:07:07.080 END TEST lvs_grow_clean 00:07:07.080 ************************************ 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.080 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.338 ************************************ 00:07:07.338 START TEST lvs_grow_dirty 00:07:07.338 ************************************ 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.338 15:00:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:07.338 15:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.596 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:07.596 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.596 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:07.596 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:07.596 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.854 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.854 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.854 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 376bd9b3-a7a5-4297-804c-ebc8819a488a lvol 150 00:07:08.111 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:08.111 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.111 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.111 [2024-12-09 15:00:09.874084] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.111 [2024-12-09 15:00:09.874132] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.111 true 00:07:08.111 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:08.111 15:00:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:08.368 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:08.368 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.626 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:08.884 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.884 [2024-12-09 15:00:10.628317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.884 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1285604 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1285604 /var/tmp/bdevperf.sock 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1285604 ']' 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.142 15:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.142 [2024-12-09 15:00:10.867967] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:07:09.142 [2024-12-09 15:00:10.868014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285604 ] 00:07:09.400 [2024-12-09 15:00:10.940960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.400 [2024-12-09 15:00:10.979665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.400 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.400 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:09.400 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:09.657 Nvme0n1 00:07:09.657 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:09.915 [ 00:07:09.915 { 00:07:09.915 "name": "Nvme0n1", 00:07:09.915 "aliases": [ 00:07:09.915 "1172c427-86cb-4abc-9aff-c700c54f85c2" 00:07:09.915 ], 00:07:09.915 "product_name": "NVMe disk", 00:07:09.915 "block_size": 4096, 00:07:09.915 "num_blocks": 38912, 00:07:09.915 "uuid": "1172c427-86cb-4abc-9aff-c700c54f85c2", 00:07:09.915 "numa_id": 1, 00:07:09.915 "assigned_rate_limits": { 00:07:09.915 "rw_ios_per_sec": 0, 00:07:09.915 "rw_mbytes_per_sec": 0, 00:07:09.915 "r_mbytes_per_sec": 0, 00:07:09.915 "w_mbytes_per_sec": 0 00:07:09.915 }, 00:07:09.915 "claimed": false, 00:07:09.915 "zoned": false, 00:07:09.915 "supported_io_types": { 00:07:09.915 "read": true, 00:07:09.915 "write": true, 00:07:09.915 "unmap": true, 00:07:09.915 "flush": true, 00:07:09.915 "reset": true, 00:07:09.915 "nvme_admin": true, 00:07:09.915 "nvme_io": true, 00:07:09.915 "nvme_io_md": false, 00:07:09.915 "write_zeroes": true, 00:07:09.915 "zcopy": false, 00:07:09.915 "get_zone_info": false, 00:07:09.915 "zone_management": false, 00:07:09.915 "zone_append": false, 00:07:09.915 "compare": true, 00:07:09.915 "compare_and_write": true, 00:07:09.915 "abort": true, 00:07:09.915 "seek_hole": false, 00:07:09.915 "seek_data": false, 00:07:09.915 "copy": true, 00:07:09.915 "nvme_iov_md": false 00:07:09.915 }, 00:07:09.915 "memory_domains": [ 00:07:09.915 { 00:07:09.915 "dma_device_id": "system", 00:07:09.915 "dma_device_type": 1 00:07:09.915 } 00:07:09.915 ], 00:07:09.915 "driver_specific": { 00:07:09.915 "nvme": [ 00:07:09.915 { 00:07:09.915 "trid": { 00:07:09.915 "trtype": "TCP", 00:07:09.915 "adrfam": "IPv4", 00:07:09.915 "traddr": "10.0.0.2", 00:07:09.915 "trsvcid": "4420", 00:07:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:09.915 }, 00:07:09.915 "ctrlr_data": { 00:07:09.915 "cntlid": 1, 00:07:09.915 "vendor_id": "0x8086", 00:07:09.915 "model_number": "SPDK bdev Controller", 00:07:09.915 "serial_number": "SPDK0", 00:07:09.915 "firmware_revision": "25.01", 00:07:09.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.915 "oacs": { 00:07:09.915 "security": 0, 00:07:09.915 "format": 0, 00:07:09.915 "firmware": 0, 00:07:09.915 "ns_manage": 0 00:07:09.915 }, 00:07:09.915 "multi_ctrlr": true, 00:07:09.915 
"ana_reporting": false 00:07:09.915 }, 00:07:09.915 "vs": { 00:07:09.915 "nvme_version": "1.3" 00:07:09.915 }, 00:07:09.916 "ns_data": { 00:07:09.916 "id": 1, 00:07:09.916 "can_share": true 00:07:09.916 } 00:07:09.916 } 00:07:09.916 ], 00:07:09.916 "mp_policy": "active_passive" 00:07:09.916 } 00:07:09.916 } 00:07:09.916 ] 00:07:09.916 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1285609 00:07:09.916 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:09.916 15:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:09.916 Running I/O for 10 seconds... 00:07:11.291 Latency(us) 00:07:11.291 [2024-12-09T14:00:13.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.291 Nvme0n1 : 1.00 23760.00 92.81 0.00 0.00 0.00 0.00 0.00 00:07:11.291 [2024-12-09T14:00:13.086Z] =================================================================================================================== 00:07:11.291 [2024-12-09T14:00:13.086Z] Total : 23760.00 92.81 0.00 0.00 0.00 0.00 0.00 00:07:11.291 00:07:11.858 15:00:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:12.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.115 Nvme0n1 : 2.00 23862.50 93.21 0.00 0.00 0.00 0.00 0.00 00:07:12.115 [2024-12-09T14:00:13.910Z] =================================================================================================================== 00:07:12.115 [2024-12-09T14:00:13.910Z] Total : 23862.50 93.21 0.00 0.00 0.00 0.00 0.00 00:07:12.115 00:07:12.115 true 00:07:12.115 15:00:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:12.115 15:00:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:12.373 15:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:12.373 15:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:12.373 15:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1285609 00:07:12.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.940 Nvme0n1 : 3.00 23911.00 93.40 0.00 0.00 0.00 0.00 0.00 00:07:12.940 [2024-12-09T14:00:14.735Z] =================================================================================================================== 00:07:12.940 [2024-12-09T14:00:14.735Z] Total : 23911.00 93.40 0.00 0.00 0.00 0.00 0.00 00:07:12.940 00:07:13.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.874 Nvme0n1 : 4.00 23966.25 93.62 0.00 0.00 0.00 0.00 0.00 00:07:13.874 [2024-12-09T14:00:15.669Z] 
=================================================================================================================== 00:07:13.874 [2024-12-09T14:00:15.669Z] Total : 23966.25 93.62 0.00 0.00 0.00 0.00 0.00 00:07:13.874 00:07:15.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.248 Nvme0n1 : 5.00 24007.20 93.78 0.00 0.00 0.00 0.00 0.00 00:07:15.248 [2024-12-09T14:00:17.043Z] =================================================================================================================== 00:07:15.248 [2024-12-09T14:00:17.043Z] Total : 24007.20 93.78 0.00 0.00 0.00 0.00 0.00 00:07:15.248 00:07:16.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.181 Nvme0n1 : 6.00 24031.83 93.87 0.00 0.00 0.00 0.00 0.00 00:07:16.181 [2024-12-09T14:00:17.976Z] =================================================================================================================== 00:07:16.181 [2024-12-09T14:00:17.976Z] Total : 24031.83 93.87 0.00 0.00 0.00 0.00 0.00 00:07:16.181 00:07:17.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.116 Nvme0n1 : 7.00 24068.14 94.02 0.00 0.00 0.00 0.00 0.00 00:07:17.116 [2024-12-09T14:00:18.911Z] =================================================================================================================== 00:07:17.116 [2024-12-09T14:00:18.911Z] Total : 24068.14 94.02 0.00 0.00 0.00 0.00 0.00 00:07:17.116 00:07:18.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.050 Nvme0n1 : 8.00 24090.12 94.10 0.00 0.00 0.00 0.00 0.00 00:07:18.050 [2024-12-09T14:00:19.845Z] =================================================================================================================== 00:07:18.050 [2024-12-09T14:00:19.845Z] Total : 24090.12 94.10 0.00 0.00 0.00 0.00 0.00 00:07:18.050 00:07:18.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.983 Nvme0n1 : 9.00 24082.22 94.07 0.00 0.00 0.00 0.00 0.00 00:07:18.983 [2024-12-09T14:00:20.778Z] =================================================================================================================== 00:07:18.983 [2024-12-09T14:00:20.778Z] Total : 24082.22 94.07 0.00 0.00 0.00 0.00 0.00 00:07:18.983 00:07:19.917 00:07:19.917 Latency(us) 00:07:19.917 [2024-12-09T14:00:21.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.917 Nvme0n1 : 10.00 24099.73 94.14 0.00 0.00 5308.37 2449.80 9924.02 00:07:19.917 [2024-12-09T14:00:21.712Z] =================================================================================================================== 00:07:19.917 [2024-12-09T14:00:21.712Z] Total : 24099.73 94.14 0.00 0.00 5308.37 2449.80 9924.02 00:07:19.917 { 00:07:19.917 "results": [ 00:07:19.917 { 00:07:19.917 "job": "Nvme0n1", 00:07:19.917 "core_mask": "0x2", 00:07:19.917 "workload": "randwrite", 00:07:19.917 "status": "finished", 00:07:19.917 "queue_depth": 128, 00:07:19.917 "io_size": 4096, 00:07:19.917 "runtime": 10.001358, 00:07:19.917 "iops": 24099.727257038496, 00:07:19.917 "mibps": 94.13955959780662, 00:07:19.917 "io_failed": 0, 00:07:19.917 "io_timeout": 0, 00:07:19.917 "avg_latency_us": 5308.371460229214, 00:07:19.917 "min_latency_us": 2449.7980952380954, 00:07:19.917 "max_latency_us": 9924.022857142858 00:07:19.917 } 00:07:19.917 ], 00:07:19.917 "core_count": 1 00:07:19.917 } 00:07:19.917 15:00:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1285604 00:07:19.917 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1285604 ']' 00:07:19.917 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1285604 00:07:19.917 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:19.917 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1285604 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1285604' 00:07:20.176 killing process with pid 1285604 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1285604 00:07:20.176 Received shutdown signal, test time was about 10.000000 seconds 00:07:20.176 00:07:20.176 Latency(us) 00:07:20.176 [2024-12-09T14:00:21.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.176 [2024-12-09T14:00:21.971Z] =================================================================================================================== 00:07:20.176 [2024-12-09T14:00:21.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1285604 00:07:20.176 15:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.434 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.692 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:20.692 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1281949 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1281949 00:07:20.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1281949 Killed "${NVMF_APP[@]}" "$@" 00:07:20.950 15:00:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1287445 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1287445 00:07:20.950 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:20.951 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1287445 ']' 00:07:20.951 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.951 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.951 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.951 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.951 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.951 [2024-12-09 15:00:22.613028] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:20.951 [2024-12-09 15:00:22.613075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.951 [2024-12-09 15:00:22.691324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.951 [2024-12-09 15:00:22.730190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.951 [2024-12-09 15:00:22.730231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.951 [2024-12-09 15:00:22.730238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.951 [2024-12-09 15:00:22.730244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.951 [2024-12-09 15:00:22.730249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
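The dirty variant kills the first nvmf_tgt with SIGKILL so the lvstore is never cleanly unloaded, then starts a fresh target before the AIO file is re-attached. A minimal standalone sketch of that restart step, assuming a local target run without the network-namespace wrapper and shm-id flag this CI job uses, rpc.py on the default /var/tmp/spdk.sock socket, and an illustrative rpc_get_methods poll in place of the test's own waitforlisten helper:

  kill -9 "$nvmfpid"                        # unclean shutdown: the lvstore on aio_bdev stays dirty
  ./build/bin/nvmf_tgt -m 0x1 -e 0xFFFF &   # restart the target with the same core mask and tracepoint mask
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for the RPC socket to answer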
00:07:20.951 [2024-12-09 15:00:22.730793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.209 15:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.467 [2024-12-09 15:00:23.036596] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:21.467 [2024-12-09 15:00:23.036696] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:21.467 [2024-12-09 15:00:23.036722] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:21.467 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:21.467 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:21.467 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:21.467 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.467 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:21.467 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.468 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.468 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:21.468 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1172c427-86cb-4abc-9aff-c700c54f85c2 -t 2000 00:07:21.726 [ 00:07:21.726 { 00:07:21.726 "name": "1172c427-86cb-4abc-9aff-c700c54f85c2", 00:07:21.726 "aliases": [ 00:07:21.726 "lvs/lvol" 00:07:21.726 ], 00:07:21.726 "product_name": "Logical Volume", 00:07:21.726 "block_size": 4096, 00:07:21.726 "num_blocks": 38912, 00:07:21.726 "uuid": "1172c427-86cb-4abc-9aff-c700c54f85c2", 00:07:21.726 "assigned_rate_limits": { 00:07:21.726 "rw_ios_per_sec": 0, 00:07:21.726 "rw_mbytes_per_sec": 0, 00:07:21.726 "r_mbytes_per_sec": 0, 00:07:21.726 "w_mbytes_per_sec": 0 00:07:21.726 }, 00:07:21.726 "claimed": false, 00:07:21.726 "zoned": false, 
00:07:21.726 "supported_io_types": { 00:07:21.726 "read": true, 00:07:21.726 "write": true, 00:07:21.726 "unmap": true, 00:07:21.726 "flush": false, 00:07:21.726 "reset": true, 00:07:21.726 "nvme_admin": false, 00:07:21.726 "nvme_io": false, 00:07:21.726 "nvme_io_md": false, 00:07:21.726 "write_zeroes": true, 00:07:21.726 "zcopy": false, 00:07:21.726 "get_zone_info": false, 00:07:21.726 "zone_management": false, 00:07:21.726 "zone_append": false, 00:07:21.726 "compare": false, 00:07:21.726 "compare_and_write": false, 00:07:21.726 "abort": false, 00:07:21.726 "seek_hole": true, 00:07:21.726 "seek_data": true, 00:07:21.726 "copy": false, 00:07:21.726 "nvme_iov_md": false 00:07:21.726 }, 00:07:21.726 "driver_specific": { 00:07:21.726 "lvol": { 00:07:21.726 "lvol_store_uuid": "376bd9b3-a7a5-4297-804c-ebc8819a488a", 00:07:21.726 "base_bdev": "aio_bdev", 00:07:21.726 "thin_provision": false, 00:07:21.726 "num_allocated_clusters": 38, 00:07:21.726 "snapshot": false, 00:07:21.726 "clone": false, 00:07:21.726 "esnap_clone": false 00:07:21.726 } 00:07:21.726 } 00:07:21.726 } 00:07:21.726 ] 00:07:21.726 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:21.726 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:21.726 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:21.984 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:21.984 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:21.984 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:22.243 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:22.243 15:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.243 [2024-12-09 15:00:23.965554] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.243 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:22.502 request: 00:07:22.502 { 00:07:22.502 "uuid": "376bd9b3-a7a5-4297-804c-ebc8819a488a", 00:07:22.502 "method": "bdev_lvol_get_lvstores", 00:07:22.502 "req_id": 1 00:07:22.502 } 00:07:22.502 Got JSON-RPC error response 00:07:22.502 response: 00:07:22.502 { 00:07:22.502 "code": -19, 00:07:22.502 "message": "No such device" 00:07:22.502 } 00:07:22.502 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:22.502 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.502 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.502 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.502 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.761 aio_bdev 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.761 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.020 15:00:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1172c427-86cb-4abc-9aff-c700c54f85c2 -t 2000 00:07:23.020 [ 00:07:23.020 { 00:07:23.020 "name": "1172c427-86cb-4abc-9aff-c700c54f85c2", 00:07:23.020 "aliases": [ 00:07:23.020 "lvs/lvol" 00:07:23.020 ], 00:07:23.020 "product_name": "Logical Volume", 00:07:23.020 "block_size": 4096, 00:07:23.020 "num_blocks": 38912, 00:07:23.020 "uuid": "1172c427-86cb-4abc-9aff-c700c54f85c2", 00:07:23.020 "assigned_rate_limits": { 00:07:23.020 "rw_ios_per_sec": 0, 00:07:23.020 "rw_mbytes_per_sec": 0, 00:07:23.020 "r_mbytes_per_sec": 0, 00:07:23.020 "w_mbytes_per_sec": 0 00:07:23.020 }, 00:07:23.020 "claimed": false, 00:07:23.020 "zoned": false, 00:07:23.020 "supported_io_types": { 00:07:23.020 "read": true, 00:07:23.020 "write": true, 00:07:23.020 "unmap": true, 00:07:23.020 "flush": false, 00:07:23.020 "reset": true, 00:07:23.020 "nvme_admin": false, 00:07:23.020 "nvme_io": false, 00:07:23.020 "nvme_io_md": false, 00:07:23.020 "write_zeroes": true, 00:07:23.020 "zcopy": false, 00:07:23.020 "get_zone_info": false, 00:07:23.020 "zone_management": false, 00:07:23.020 "zone_append": false, 00:07:23.020 "compare": false, 00:07:23.020 "compare_and_write": false, 00:07:23.020 "abort": false, 00:07:23.020 "seek_hole": true, 00:07:23.020 "seek_data": true, 00:07:23.020 "copy": false, 00:07:23.020 "nvme_iov_md": false 00:07:23.020 }, 00:07:23.020 "driver_specific": { 00:07:23.020 "lvol": { 00:07:23.020 "lvol_store_uuid": "376bd9b3-a7a5-4297-804c-ebc8819a488a", 00:07:23.020 "base_bdev": "aio_bdev", 00:07:23.020 "thin_provision": false, 00:07:23.020 "num_allocated_clusters": 38, 00:07:23.020 "snapshot": false, 00:07:23.020 "clone": false, 00:07:23.020 "esnap_clone": false 00:07:23.020 } 00:07:23.020 } 00:07:23.020 } 00:07:23.020 ] 00:07:23.020 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:23.020 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:23.020 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:23.280 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:23.280 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 00:07:23.280 15:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.539 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.539 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1172c427-86cb-4abc-9aff-c700c54f85c2 00:07:23.539 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 376bd9b3-a7a5-4297-804c-ebc8819a488a 
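Re-creating the AIO bdev over the grown file is what triggers the blobstore recovery logged above ("Performing recovery on blobstore"); once the lvol reappears, the test checks that the recovered lvstore reflects the 400M size (99 total clusters, 61 free) and then tears everything down. A condensed sketch of that recover-verify-teardown sequence, assuming $lvs and $lvol hold the lvstore and lvol UUIDs reported earlier in this run and using an abbreviated relative path for the AIO file:

  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096    # re-attach the file; recovery rebuilds the lvstore
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000                          # wait until lvs/lvol is exposed again
  free=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')          # expect 61
  total=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')   # expect 99
  ./scripts/rpc.py bdev_lvol_delete "$lvol"
  ./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  ./scripts/rpc.py bdev_aio_delete aio_bdev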
00:07:23.797 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.056 00:07:24.056 real 0m16.863s 00:07:24.056 user 0m43.882s 00:07:24.056 sys 0m3.602s 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.056 ************************************ 00:07:24.056 END TEST lvs_grow_dirty 00:07:24.056 ************************************ 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:24.056 nvmf_trace.0 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.056 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.315 rmmod nvme_tcp 00:07:24.315 rmmod nvme_fabrics 00:07:24.315 rmmod nvme_keyring 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1287445 ']' 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1287445 00:07:24.315 
15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1287445 ']' 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1287445 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:24.315 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.316 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287445 00:07:24.316 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.316 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.316 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1287445' 00:07:24.316 killing process with pid 1287445 00:07:24.316 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1287445 00:07:24.316 15:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1287445 00:07:24.574 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.574 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.575 15:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.482 00:07:26.482 real 0m41.816s 00:07:26.482 user 1m4.823s 00:07:26.482 sys 0m9.985s 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.482 ************************************ 00:07:26.482 END TEST nvmf_lvs_grow 00:07:26.482 ************************************ 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.482 ************************************ 00:07:26.482 START TEST nvmf_bdev_io_wait 00:07:26.482 ************************************ 00:07:26.482 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:26.741 * Looking for test storage... 00:07:26.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:26.741 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.742 --rc genhtml_branch_coverage=1 00:07:26.742 --rc genhtml_function_coverage=1 00:07:26.742 --rc genhtml_legend=1 00:07:26.742 --rc geninfo_all_blocks=1 00:07:26.742 --rc geninfo_unexecuted_blocks=1 00:07:26.742 00:07:26.742 ' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.742 --rc genhtml_branch_coverage=1 00:07:26.742 --rc genhtml_function_coverage=1 00:07:26.742 --rc genhtml_legend=1 00:07:26.742 --rc geninfo_all_blocks=1 00:07:26.742 --rc geninfo_unexecuted_blocks=1 00:07:26.742 00:07:26.742 ' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.742 --rc genhtml_branch_coverage=1 00:07:26.742 --rc genhtml_function_coverage=1 00:07:26.742 --rc genhtml_legend=1 00:07:26.742 --rc geninfo_all_blocks=1 00:07:26.742 --rc geninfo_unexecuted_blocks=1 00:07:26.742 00:07:26.742 ' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.742 --rc genhtml_branch_coverage=1 00:07:26.742 --rc genhtml_function_coverage=1 00:07:26.742 --rc genhtml_legend=1 00:07:26.742 --rc geninfo_all_blocks=1 00:07:26.742 --rc geninfo_unexecuted_blocks=1 00:07:26.742 00:07:26.742 ' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.742 15:00:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.742 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.743 15:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.312 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:33.313 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:33.313 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.313 15:00:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:33.313 Found net devices under 0000:af:00.0: cvl_0_0 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:33.313 Found net devices under 0000:af:00.1: cvl_0_1 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:07:33.313 00:07:33.313 --- 10.0.0.2 ping statistics --- 00:07:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.313 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:07:33.313 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:33.313 00:07:33.313 --- 10.0.0.1 ping statistics --- 00:07:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.313 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1291672 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1291672 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1291672 ']' 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 [2024-12-09 15:00:34.519559] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:07:33.314 [2024-12-09 15:00:34.519608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.314 [2024-12-09 15:00:34.597408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.314 [2024-12-09 15:00:34.638434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.314 [2024-12-09 15:00:34.638471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.314 [2024-12-09 15:00:34.638477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.314 [2024-12-09 15:00:34.638484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.314 [2024-12-09 15:00:34.638489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.314 [2024-12-09 15:00:34.640021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.314 [2024-12-09 15:00:34.640054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.314 [2024-12-09 15:00:34.640166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.314 [2024-12-09 15:00:34.640175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:33.314 [2024-12-09 15:00:34.784099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 Malloc0 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.314 [2024-12-09 15:00:34.839364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1291694 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1291696 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.314 { 00:07:33.314 "params": { 
00:07:33.314 "name": "Nvme$subsystem", 00:07:33.314 "trtype": "$TEST_TRANSPORT", 00:07:33.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.314 "adrfam": "ipv4", 00:07:33.314 "trsvcid": "$NVMF_PORT", 00:07:33.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.314 "hdgst": ${hdgst:-false}, 00:07:33.314 "ddgst": ${ddgst:-false} 00:07:33.314 }, 00:07:33.314 "method": "bdev_nvme_attach_controller" 00:07:33.314 } 00:07:33.314 EOF 00:07:33.314 )") 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1291698 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.314 { 00:07:33.314 "params": { 00:07:33.314 "name": "Nvme$subsystem", 00:07:33.314 "trtype": "$TEST_TRANSPORT", 00:07:33.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.314 "adrfam": "ipv4", 00:07:33.314 "trsvcid": "$NVMF_PORT", 00:07:33.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.314 "hdgst": ${hdgst:-false}, 00:07:33.314 "ddgst": ${ddgst:-false} 00:07:33.314 }, 00:07:33.314 "method": "bdev_nvme_attach_controller" 00:07:33.314 } 00:07:33.314 EOF 00:07:33.314 )") 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1291701 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:33.314 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.315 { 00:07:33.315 "params": { 
00:07:33.315 "name": "Nvme$subsystem", 00:07:33.315 "trtype": "$TEST_TRANSPORT", 00:07:33.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.315 "adrfam": "ipv4", 00:07:33.315 "trsvcid": "$NVMF_PORT", 00:07:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.315 "hdgst": ${hdgst:-false}, 00:07:33.315 "ddgst": ${ddgst:-false} 00:07:33.315 }, 00:07:33.315 "method": "bdev_nvme_attach_controller" 00:07:33.315 } 00:07:33.315 EOF 00:07:33.315 )") 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:33.315 { 00:07:33.315 "params": { 00:07:33.315 "name": "Nvme$subsystem", 00:07:33.315 "trtype": "$TEST_TRANSPORT", 00:07:33.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:33.315 "adrfam": "ipv4", 00:07:33.315 "trsvcid": "$NVMF_PORT", 00:07:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:33.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:33.315 "hdgst": ${hdgst:-false}, 00:07:33.315 "ddgst": ${ddgst:-false} 00:07:33.315 }, 00:07:33.315 "method": "bdev_nvme_attach_controller" 00:07:33.315 } 00:07:33.315 EOF 00:07:33.315 )") 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1291694 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.315 "params": { 00:07:33.315 "name": "Nvme1", 00:07:33.315 "trtype": "tcp", 00:07:33.315 "traddr": "10.0.0.2", 00:07:33.315 "adrfam": "ipv4", 00:07:33.315 "trsvcid": "4420", 00:07:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.315 "hdgst": false, 00:07:33.315 "ddgst": false 00:07:33.315 }, 00:07:33.315 "method": "bdev_nvme_attach_controller" 00:07:33.315 }' 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.315 "params": { 00:07:33.315 "name": "Nvme1", 00:07:33.315 "trtype": "tcp", 00:07:33.315 "traddr": "10.0.0.2", 00:07:33.315 "adrfam": "ipv4", 00:07:33.315 "trsvcid": "4420", 00:07:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.315 "hdgst": false, 00:07:33.315 "ddgst": false 00:07:33.315 }, 00:07:33.315 "method": "bdev_nvme_attach_controller" 00:07:33.315 }' 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.315 "params": { 00:07:33.315 "name": "Nvme1", 00:07:33.315 "trtype": "tcp", 00:07:33.315 "traddr": "10.0.0.2", 00:07:33.315 "adrfam": "ipv4", 00:07:33.315 "trsvcid": "4420", 00:07:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.315 "hdgst": false, 00:07:33.315 "ddgst": false 00:07:33.315 }, 00:07:33.315 "method": "bdev_nvme_attach_controller" 00:07:33.315 }' 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:33.315 15:00:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:33.315 "params": { 00:07:33.315 "name": "Nvme1", 00:07:33.315 "trtype": "tcp", 00:07:33.315 "traddr": "10.0.0.2", 00:07:33.315 "adrfam": "ipv4", 00:07:33.315 "trsvcid": "4420", 00:07:33.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:33.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:33.315 "hdgst": false, 00:07:33.315 "ddgst": false 00:07:33.315 }, 00:07:33.315 "method": "bdev_nvme_attach_controller" 00:07:33.315 }' 00:07:33.315 [2024-12-09 15:00:34.891810] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:33.315 [2024-12-09 15:00:34.891812] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:33.315 [2024-12-09 15:00:34.891861] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 15:00:34.891862] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:33.315 --proc-type=auto ] 00:07:33.315 [2024-12-09 15:00:34.892104] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:07:33.315 [2024-12-09 15:00:34.892140] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:33.315 [2024-12-09 15:00:34.894301] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:07:33.315 [2024-12-09 15:00:34.894348] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:33.315 [2024-12-09 15:00:35.080220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.573 [2024-12-09 15:00:35.125457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:33.573 [2024-12-09 15:00:35.177718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.573 [2024-12-09 15:00:35.222334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:33.573 [2024-12-09 15:00:35.270605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.573 [2024-12-09 15:00:35.316996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:33.573 [2024-12-09 15:00:35.331412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.830 [2024-12-09 15:00:35.373158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:33.830 Running I/O for 1 seconds... 00:07:33.830 Running I/O for 1 seconds... 00:07:33.830 Running I/O for 1 seconds... 00:07:34.088 Running I/O for 1 seconds... 00:07:35.022 7830.00 IOPS, 30.59 MiB/s 00:07:35.022 Latency(us) 00:07:35.022 [2024-12-09T14:00:36.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.022 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:35.022 Nvme1n1 : 1.02 7814.74 30.53 0.00 0.00 16257.95 6366.35 27587.54 00:07:35.022 [2024-12-09T14:00:36.817Z] =================================================================================================================== 00:07:35.022 [2024-12-09T14:00:36.817Z] Total : 7814.74 30.53 0.00 0.00 16257.95 6366.35 27587.54 00:07:35.022 12239.00 IOPS, 47.81 MiB/s 00:07:35.022 Latency(us) 00:07:35.022 [2024-12-09T14:00:36.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.022 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:35.022 Nvme1n1 : 1.01 12298.01 48.04 0.00 0.00 10375.12 4712.35 19972.88 00:07:35.022 [2024-12-09T14:00:36.817Z] =================================================================================================================== 00:07:35.022 [2024-12-09T14:00:36.817Z] Total : 12298.01 48.04 0.00 0.00 10375.12 4712.35 19972.88 00:07:35.022 7910.00 IOPS, 30.90 MiB/s 00:07:35.022 Latency(us) 00:07:35.022 [2024-12-09T14:00:36.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.022 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:35.022 Nvme1n1 : 1.00 8021.12 31.33 0.00 0.00 15924.10 2808.69 39446.43 00:07:35.022 [2024-12-09T14:00:36.817Z] =================================================================================================================== 00:07:35.022 [2024-12-09T14:00:36.817Z] Total : 8021.12 31.33 0.00 0.00 15924.10 2808.69 39446.43 00:07:35.022 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1291696 00:07:35.022 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1291698 00:07:35.022 244400.00 IOPS, 954.69 MiB/s 00:07:35.022 Latency(us) 00:07:35.022 [2024-12-09T14:00:36.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.022 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 
128, IO size: 4096) 00:07:35.022 Nvme1n1 : 1.00 244030.24 953.24 0.00 0.00 521.96 220.40 1490.16 00:07:35.022 [2024-12-09T14:00:36.817Z] =================================================================================================================== 00:07:35.022 [2024-12-09T14:00:36.817Z] Total : 244030.24 953.24 0.00 0.00 521.96 220.40 1490.16 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1291701 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.281 rmmod nvme_tcp 00:07:35.281 rmmod nvme_fabrics 00:07:35.281 rmmod nvme_keyring 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1291672 ']' 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1291672 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1291672 ']' 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1291672 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1291672 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1291672' 00:07:35.281 killing process with pid 1291672 
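The shutdown path traced here and continued just below (nvmftestfini → nvmfcleanup → killprocess, then iptr and remove_spdk_ns) boils down to roughly the following. This is a condensed sketch, not the harness code itself: the module, kill, iptables, and address-flush steps mirror the trace, while the final netns delete is an assumed equivalent of remove_spdk_ns.

# Condensed sketch of the teardown traced around this point.
modprobe -v -r nvme-tcp                 # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, as logged above
kill "$nvmfpid"                         # stop the nvmf_tgt started for this test
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
# drop only the SPDK_NVMF-tagged iptables rules added during nvmftestinit
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1                # clear the initiator-side address
ip netns delete cvl_0_0_ns_spdk         # assumed remove_spdk_ns equivalent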
00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1291672 00:07:35.281 15:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1291672 00:07:35.540 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:35.540 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:35.540 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:35.540 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:35.540 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.541 15:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:37.445 00:07:37.445 real 0m10.901s 00:07:37.445 user 0m16.758s 00:07:37.445 sys 0m6.146s 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.445 ************************************ 00:07:37.445 END TEST nvmf_bdev_io_wait 00:07:37.445 ************************************ 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.445 15:00:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.705 ************************************ 00:07:37.705 START TEST nvmf_queue_depth 00:07:37.705 ************************************ 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:37.705 * Looking for test storage... 
00:07:37.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.705 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:37.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.706 --rc genhtml_branch_coverage=1 00:07:37.706 --rc genhtml_function_coverage=1 00:07:37.706 --rc genhtml_legend=1 00:07:37.706 --rc geninfo_all_blocks=1 00:07:37.706 --rc geninfo_unexecuted_blocks=1 00:07:37.706 00:07:37.706 ' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:37.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.706 --rc genhtml_branch_coverage=1 00:07:37.706 --rc genhtml_function_coverage=1 00:07:37.706 --rc genhtml_legend=1 00:07:37.706 --rc geninfo_all_blocks=1 00:07:37.706 --rc geninfo_unexecuted_blocks=1 00:07:37.706 00:07:37.706 ' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:37.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.706 --rc genhtml_branch_coverage=1 00:07:37.706 --rc genhtml_function_coverage=1 00:07:37.706 --rc genhtml_legend=1 00:07:37.706 --rc geninfo_all_blocks=1 00:07:37.706 --rc geninfo_unexecuted_blocks=1 00:07:37.706 00:07:37.706 ' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:37.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.706 --rc genhtml_branch_coverage=1 00:07:37.706 --rc genhtml_function_coverage=1 00:07:37.706 --rc genhtml_legend=1 00:07:37.706 --rc geninfo_all_blocks=1 00:07:37.706 --rc geninfo_unexecuted_blocks=1 00:07:37.706 00:07:37.706 ' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
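The "[: : integer expression expected" message above is noise rather than a failure: nvmf/common.sh line 33 ends up running [ '' -eq 1 ] because the flag it checks is unset in this job, and the old-style test builtin complains before evaluating to false. A hedged sketch of the quieter pattern; SOME_FLAG is a placeholder, the trace does not show which variable common.sh actually tests:

    # What the trace shows after expansion:  '[' '' -eq 1 ']'
    # -> "integer expression expected", then the test is simply false.

    # Defaulting the variable keeps the same behaviour without the warning:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

    # The arithmetic test is another option; an unset name evaluates to 0:
    if (( SOME_FLAG == 1 )); then
        echo "flag enabled"
    fi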
MALLOC_BLOCK_SIZE=512 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:37.706 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:37.707 15:00:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:44.444 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:44.444 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:44.444 Found net devices under 0000:af:00.0: cvl_0_0 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:44.444 Found net devices under 0000:af:00.1: cvl_0_1 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
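What gather_supported_nvmf_pci_devs is doing above: for every whitelisted PCI function (here the two Intel E810 ports, 0x8086:0x159b), it globs /sys/bus/pci/devices/<bdf>/net/ to learn which kernel interfaces live on that function and appends them to net_devs (cvl_0_0 and cvl_0_1 in this job). A stripped-down sketch of the same sysfs walk, with the device table reduced to that single vendor:device pair:

    # Collect the net interfaces that sit on matching PCI functions.
    declare -a net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue   # Intel
        [[ $(cat "$pci/device") == 0x159b ]] || continue   # E810, as in this log
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue      # function may expose no netdev
            net_devs+=("${net##*/}")       # keep only the name, e.g. cvl_0_0
        done
    done
    (( ${#net_devs[@]} )) && printf 'Found net device: %s\n' "${net_devs[@]}"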
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:07:44.444 00:07:44.444 --- 10.0.0.2 ping statistics --- 00:07:44.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.444 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:07:44.444 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:07:44.444 00:07:44.444 --- 10.0.0.1 ping statistics --- 00:07:44.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.445 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1295682 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1295682 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1295682 ']' 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 [2024-12-09 15:00:45.530928] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
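nvmf_tcp_init, traced above, keeps the two E810 ports from short-circuiting through the local stack by moving the target port into its own network namespace: the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace while cvl_0_0 (10.0.0.2) lives inside cvl_0_0_ns_spdk, and a one-packet ping in each direction proves the link before any NVMe traffic flows. Condensed from the commands in the trace, with the same interface and namespace names:

    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0     # target port, moved into the namespace
    INI_IF=cvl_0_1     # initiator port, stays in the root namespace

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the default NVMe/TCP port and verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator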
00:07:44.445 [2024-12-09 15:00:45.530972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.445 [2024-12-09 15:00:45.611731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.445 [2024-12-09 15:00:45.649300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.445 [2024-12-09 15:00:45.649335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.445 [2024-12-09 15:00:45.649342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.445 [2024-12-09 15:00:45.649349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.445 [2024-12-09 15:00:45.649354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.445 [2024-12-09 15:00:45.649863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 [2024-12-09 15:00:45.798271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 Malloc0 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.445 15:00:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 [2024-12-09 15:00:45.848454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1295712 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1295712 /var/tmp/bdevperf.sock 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1295712 ']' 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.445 15:00:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 [2024-12-09 15:00:45.899839] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
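The body of queue_depth.sh is a short RPC sequence against the nvmf_tgt just started inside the target namespace: create the TCP transport, back it with a 64 MiB / 512 B malloc bdev, expose that bdev as a namespace of cnode1, and listen on 10.0.0.2:4420. Issued directly with scripts/rpc.py (which the rpc_cmd wrapper in the trace delegates to, default socket /var/tmp/spdk.sock), the same steps look roughly like this:

    RPC=./scripts/rpc.py   # run from the spdk checkout; path shortened here

    # Transport options exactly as in the trace (-o -u 8192).
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB backing device with 512-byte blocks, named Malloc0.
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # Subsystem (allow any host, fixed serial), namespace and TCP listener.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420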
00:07:44.445 [2024-12-09 15:00:45.899881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295712 ] 00:07:44.445 [2024-12-09 15:00:45.973387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.445 [2024-12-09 15:00:46.013306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.445 NVMe0n1 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.445 15:00:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.704 Running I/O for 10 seconds... 00:07:46.575 12288.00 IOPS, 48.00 MiB/s [2024-12-09T14:00:49.747Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-09T14:00:50.682Z] 12295.33 IOPS, 48.03 MiB/s [2024-12-09T14:00:51.617Z] 12338.75 IOPS, 48.20 MiB/s [2024-12-09T14:00:52.553Z] 12392.60 IOPS, 48.41 MiB/s [2024-12-09T14:00:53.489Z] 12440.00 IOPS, 48.59 MiB/s [2024-12-09T14:00:54.426Z] 12432.43 IOPS, 48.56 MiB/s [2024-12-09T14:00:55.363Z] 12504.12 IOPS, 48.84 MiB/s [2024-12-09T14:00:56.740Z] 12523.11 IOPS, 48.92 MiB/s [2024-12-09T14:00:56.740Z] 12550.70 IOPS, 49.03 MiB/s 00:07:54.945 Latency(us) 00:07:54.945 [2024-12-09T14:00:56.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.945 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:54.945 Verification LBA range: start 0x0 length 0x4000 00:07:54.945 NVMe0n1 : 10.06 12571.56 49.11 0.00 0.00 81159.60 18849.40 55424.73 00:07:54.945 [2024-12-09T14:00:56.740Z] =================================================================================================================== 00:07:54.945 [2024-12-09T14:00:56.740Z] Total : 12571.56 49.11 0.00 0.00 81159.60 18849.40 55424.73 00:07:54.945 { 00:07:54.945 "results": [ 00:07:54.945 { 00:07:54.945 "job": "NVMe0n1", 00:07:54.945 "core_mask": "0x1", 00:07:54.945 "workload": "verify", 00:07:54.945 "status": "finished", 00:07:54.945 "verify_range": { 00:07:54.945 "start": 0, 00:07:54.945 "length": 16384 00:07:54.945 }, 00:07:54.945 "queue_depth": 1024, 00:07:54.945 "io_size": 4096, 00:07:54.945 "runtime": 10.06486, 00:07:54.945 "iops": 12571.560856286129, 00:07:54.945 "mibps": 49.10765959486769, 00:07:54.945 "io_failed": 0, 00:07:54.945 "io_timeout": 0, 00:07:54.945 "avg_latency_us": 81159.60239218622, 00:07:54.945 "min_latency_us": 18849.401904761904, 00:07:54.945 "max_latency_us": 55424.73142857143 00:07:54.945 } 00:07:54.945 ], 00:07:54.945 "core_count": 1 00:07:54.945 } 00:07:54.945 15:00:56 
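The perform_tests output above is machine-readable: bdevperf reports a results array with one object per job carrying iops, mibps and the min/avg/max latencies in microseconds, plus a core_count. If that JSON is captured to a file, the headline numbers drop out with a one-liner; jq and the file name bdevperf_results.json are assumptions here, not part of the test:

    # Print "<job>: <iops> IOPS, <avg latency> us" for every job in the report.
    jq -r '.results[]
           | "\(.job): \(.iops | floor) IOPS, \(.avg_latency_us | floor) us avg latency"' \
        bdevperf_results.json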
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1295712 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1295712 ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1295712 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1295712 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1295712' 00:07:54.945 killing process with pid 1295712 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1295712 00:07:54.945 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.945 00:07:54.945 Latency(us) 00:07:54.945 [2024-12-09T14:00:56.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.945 [2024-12-09T14:00:56.740Z] =================================================================================================================== 00:07:54.945 [2024-12-09T14:00:56.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1295712 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.945 rmmod nvme_tcp 00:07:54.945 rmmod nvme_fabrics 00:07:54.945 rmmod nvme_keyring 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1295682 ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1295682 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1295682 ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1295682 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.945 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1295682 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1295682' 00:07:55.204 killing process with pid 1295682 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1295682 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1295682 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.204 15:00:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.742 00:07:57.742 real 0m19.771s 00:07:57.742 user 0m23.060s 00:07:57.742 sys 0m6.107s 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.742 ************************************ 00:07:57.742 END TEST nvmf_queue_depth 00:07:57.742 ************************************ 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core -- 
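Teardown mirrors the setup: killprocess stops bdevperf and then nvmf_tgt (probe the PID with kill -0, confirm the process name with ps, kill, wait), the nvme-tcp/nvme-fabrics modules are unloaded, and every firewall rule that setup tagged with an SPDK_NVMF comment is removed in one pass by filtering iptables-save before piping it back into iptables-restore. The firewall half of that pattern as a standalone sketch (the comment text is illustrative, and the final netns removal is assumed, since the trace only shows the _remove_spdk_ns wrapper):

    # Setup: tag the rule so teardown can find it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: queue_depth test rule'

    # Teardown: drop every tagged rule in one shot, leave everything else alone.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the target namespace (assumed to be what _remove_spdk_ns does).
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true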
common/autotest_common.sh@10 -- # set +x 00:07:57.742 ************************************ 00:07:57.742 START TEST nvmf_target_multipath 00:07:57.742 ************************************ 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:57.742 * Looking for test storage... 00:07:57.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.742 --rc genhtml_branch_coverage=1 00:07:57.742 --rc genhtml_function_coverage=1 00:07:57.742 --rc genhtml_legend=1 00:07:57.742 --rc geninfo_all_blocks=1 00:07:57.742 --rc geninfo_unexecuted_blocks=1 00:07:57.742 00:07:57.742 ' 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.742 --rc genhtml_branch_coverage=1 00:07:57.742 --rc genhtml_function_coverage=1 00:07:57.742 --rc genhtml_legend=1 00:07:57.742 --rc geninfo_all_blocks=1 00:07:57.742 --rc geninfo_unexecuted_blocks=1 00:07:57.742 00:07:57.742 ' 00:07:57.742 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.742 --rc genhtml_branch_coverage=1 00:07:57.742 --rc genhtml_function_coverage=1 00:07:57.742 --rc genhtml_legend=1 00:07:57.742 --rc geninfo_all_blocks=1 00:07:57.742 --rc geninfo_unexecuted_blocks=1 00:07:57.742 00:07:57.743 ' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.743 --rc genhtml_branch_coverage=1 00:07:57.743 --rc genhtml_function_coverage=1 00:07:57.743 --rc genhtml_legend=1 00:07:57.743 --rc geninfo_all_blocks=1 00:07:57.743 --rc geninfo_unexecuted_blocks=1 00:07:57.743 00:07:57.743 ' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:57.743 15:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:04.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:04.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:04.316 Found net devices under 0000:af:00.0: cvl_0_0 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.316 15:01:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:04.316 Found net devices under 0000:af:00.1: cvl_0_1 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.316 15:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.316 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:08:04.317 00:08:04.317 --- 10.0.0.2 ping statistics --- 00:08:04.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.317 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:08:04.317 00:08:04.317 --- 10.0.0.1 ping statistics --- 00:08:04.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.317 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:04.317 only one NIC for nvmf test 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
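The nvmf_tcp_init sequence traced above is the whole test bed for NVMe/TCP on physical NICs: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side, its sibling port (cvl_0_1) stays in the host namespace as the initiator side, and a tagged iptables rule plus two pings confirm the path before any SPDK process starts. A condensed replay of those commands, with the interface names, addresses and rule comment copied from this run (root required; run outside the test harness only as a sketch):

    # target side lives in its own namespace so the two ports of the same card
    # can talk to each other over real wires instead of the local stack
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open NVMe/TCP port 4420 on the initiator-side interface, tagged so the
    # teardown's "iptables-save | grep -v SPDK_NVMF | iptables-restore" can strip it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # sanity pings in both directions before any nvmf_tgt is launched
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target process started later is prefixed with "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD array above), so only the initiator-side tooling runs in the default namespace. The multipath test then prints "only one NIC for nvmf test" and exits 0 because NVMF_SECOND_TARGET_IP stayed empty, meaning only this single port pair was wired up, which is why the trace goes straight into nvmftestfini here.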
00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.317 rmmod nvme_tcp 00:08:04.317 rmmod nvme_fabrics 00:08:04.317 rmmod nvme_keyring 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.317 15:01:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:05.695 00:08:05.695 real 0m8.375s 00:08:05.695 user 0m1.857s 00:08:05.695 sys 0m4.491s 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.695 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:05.695 ************************************ 00:08:05.695 END TEST nvmf_target_multipath 00:08:05.695 ************************************ 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.955 ************************************ 00:08:05.955 START TEST nvmf_zcopy 00:08:05.955 ************************************ 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:05.955 * Looking for test storage... 
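One diagnostic in the run above is worth calling out: "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected". It appears every time common.sh is sourced, once in the multipath test and again in the zcopy test below, because line 33 is traced as '[' '' -eq 1 ']', a numeric test against a flag that is unset in this job's configuration. The test evaluates false and execution continues at line 37, so it is noise rather than a failure, but the message is avoidable. A minimal illustration of the pattern and a quiet equivalent (the flag name here is a placeholder, not the variable actually used in common.sh):

    #!/usr/bin/env bash
    unset SOME_TEST_FLAG

    # what line 33 effectively does: the empty expansion makes [ complain
    # "integer expression expected" and return false
    if [ "$SOME_TEST_FLAG" -eq 1 ]; then echo "flag set"; fi

    # same logic, no noise: default the flag to 0 before the numeric compare
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then echo "flag set"; fi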
00:08:05.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:05.955 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:05.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.956 --rc genhtml_branch_coverage=1 00:08:05.956 --rc genhtml_function_coverage=1 00:08:05.956 --rc genhtml_legend=1 00:08:05.956 --rc geninfo_all_blocks=1 00:08:05.956 --rc geninfo_unexecuted_blocks=1 00:08:05.956 00:08:05.956 ' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:05.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.956 --rc genhtml_branch_coverage=1 00:08:05.956 --rc genhtml_function_coverage=1 00:08:05.956 --rc genhtml_legend=1 00:08:05.956 --rc geninfo_all_blocks=1 00:08:05.956 --rc geninfo_unexecuted_blocks=1 00:08:05.956 00:08:05.956 ' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:05.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.956 --rc genhtml_branch_coverage=1 00:08:05.956 --rc genhtml_function_coverage=1 00:08:05.956 --rc genhtml_legend=1 00:08:05.956 --rc geninfo_all_blocks=1 00:08:05.956 --rc geninfo_unexecuted_blocks=1 00:08:05.956 00:08:05.956 ' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:05.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.956 --rc genhtml_branch_coverage=1 00:08:05.956 --rc genhtml_function_coverage=1 00:08:05.956 --rc genhtml_legend=1 00:08:05.956 --rc geninfo_all_blocks=1 00:08:05.956 --rc geninfo_unexecuted_blocks=1 00:08:05.956 00:08:05.956 ' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.956 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.216 15:01:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:12.786 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:12.786 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:12.786 Found net devices under 0000:af:00.0: cvl_0_0 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:12.786 Found net devices under 0000:af:00.1: cvl_0_1 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:08:12.786 00:08:12.786 --- 10.0.0.2 ping statistics --- 00:08:12.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.786 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:08:12.786 00:08:12.786 --- 10.0.0.1 ping statistics --- 00:08:12.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.786 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.786 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1304526 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1304526 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1304526 ']' 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 [2024-12-09 15:01:13.775874] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:08:12.787 [2024-12-09 15:01:13.775916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.787 [2024-12-09 15:01:13.852515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.787 [2024-12-09 15:01:13.892198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.787 [2024-12-09 15:01:13.892238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.787 [2024-12-09 15:01:13.892246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.787 [2024-12-09 15:01:13.892252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.787 [2024-12-09 15:01:13.892257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.787 [2024-12-09 15:01:13.892784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:12.787 15:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 [2024-12-09 15:01:14.041231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 [2024-12-09 15:01:14.061420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 malloc0 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.787 { 00:08:12.787 "params": { 00:08:12.787 "name": "Nvme$subsystem", 00:08:12.787 "trtype": "$TEST_TRANSPORT", 00:08:12.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.787 "adrfam": "ipv4", 00:08:12.787 "trsvcid": "$NVMF_PORT", 00:08:12.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.787 "hdgst": ${hdgst:-false}, 00:08:12.787 "ddgst": ${ddgst:-false} 00:08:12.787 }, 00:08:12.787 "method": "bdev_nvme_attach_controller" 00:08:12.787 } 00:08:12.787 EOF 00:08:12.787 )") 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
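Before the first bdevperf job starts, the zcopy target is assembled entirely over the RPC socket of the nvmf_tgt launched inside the target namespace. The calls traced above, replayed directly with scripts/rpc.py instead of the rpc_cmd wrapper (a sketch with values copied from this run; paths are relative to the SPDK checkout):

    rpc=./scripts/rpc.py    # $rpc_py in the trace, talking to /var/tmp/spdk.sock

    # TCP transport with zero-copy enabled; -c 0 disables in-capsule data
    # (flags copied verbatim from zcopy.sh@22 above)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # subsystem cnode1: allow any host (-a), fixed serial, at most 10 namespaces
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

    # data listener plus discovery listener on the target-namespace address
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # bdevperf runs as the initiator; its only configuration is a JSON blob built
    # on the fly by gen_nvmf_target_json (provided by test/nvmf/common.sh) and
    # handed over through process substitution, which is the --json /dev/fd/62 above
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The generated JSON holds a single bdev_nvme_attach_controller entry (printed in full just below) pointing controller Nvme1 at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, so bdevperf's verify workload runs against the malloc namespace over NVMe/TCP with 8 KiB I/Os at queue depth 128 for 10 seconds.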
00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:12.787 15:01:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.787 "params": { 00:08:12.787 "name": "Nvme1", 00:08:12.787 "trtype": "tcp", 00:08:12.787 "traddr": "10.0.0.2", 00:08:12.787 "adrfam": "ipv4", 00:08:12.787 "trsvcid": "4420", 00:08:12.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.787 "hdgst": false, 00:08:12.787 "ddgst": false 00:08:12.787 }, 00:08:12.787 "method": "bdev_nvme_attach_controller" 00:08:12.787 }' 00:08:12.787 [2024-12-09 15:01:14.148411] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:08:12.787 [2024-12-09 15:01:14.148459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304657 ] 00:08:12.787 [2024-12-09 15:01:14.222393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.787 [2024-12-09 15:01:14.261923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.046 Running I/O for 10 seconds... 00:08:14.916 8801.00 IOPS, 68.76 MiB/s [2024-12-09T14:01:17.646Z] 8847.50 IOPS, 69.12 MiB/s [2024-12-09T14:01:19.022Z] 8869.67 IOPS, 69.29 MiB/s [2024-12-09T14:01:19.959Z] 8883.50 IOPS, 69.40 MiB/s [2024-12-09T14:01:20.895Z] 8891.20 IOPS, 69.46 MiB/s [2024-12-09T14:01:21.830Z] 8897.83 IOPS, 69.51 MiB/s [2024-12-09T14:01:22.766Z] 8901.57 IOPS, 69.54 MiB/s [2024-12-09T14:01:23.702Z] 8900.62 IOPS, 69.54 MiB/s [2024-12-09T14:01:24.638Z] 8908.67 IOPS, 69.60 MiB/s [2024-12-09T14:01:24.638Z] 8910.00 IOPS, 69.61 MiB/s 00:08:22.843 Latency(us) 00:08:22.843 [2024-12-09T14:01:24.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:22.843 Verification LBA range: start 0x0 length 0x1000 00:08:22.843 Nvme1n1 : 10.01 8910.12 69.61 0.00 0.00 14324.23 1646.20 23468.13 00:08:22.843 [2024-12-09T14:01:24.638Z] =================================================================================================================== 00:08:22.843 [2024-12-09T14:01:24.638Z] Total : 8910.12 69.61 0.00 0.00 14324.23 1646.20 23468.13 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1306359 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.102 { 00:08:23.102 "params": { 00:08:23.102 "name": 
"Nvme$subsystem", 00:08:23.102 "trtype": "$TEST_TRANSPORT", 00:08:23.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.102 "adrfam": "ipv4", 00:08:23.102 "trsvcid": "$NVMF_PORT", 00:08:23.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.102 "hdgst": ${hdgst:-false}, 00:08:23.102 "ddgst": ${ddgst:-false} 00:08:23.102 }, 00:08:23.102 "method": "bdev_nvme_attach_controller" 00:08:23.102 } 00:08:23.102 EOF 00:08:23.102 )") 00:08:23.102 [2024-12-09 15:01:24.780300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.780334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:23.102 15:01:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.102 "params": { 00:08:23.102 "name": "Nvme1", 00:08:23.102 "trtype": "tcp", 00:08:23.102 "traddr": "10.0.0.2", 00:08:23.102 "adrfam": "ipv4", 00:08:23.102 "trsvcid": "4420", 00:08:23.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:23.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:23.102 "hdgst": false, 00:08:23.102 "ddgst": false 00:08:23.102 }, 00:08:23.102 "method": "bdev_nvme_attach_controller" 00:08:23.102 }' 00:08:23.102 [2024-12-09 15:01:24.792281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.792294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.804306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.804316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.816345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.816360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.823101] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:08:23.102 [2024-12-09 15:01:24.823141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306359 ] 00:08:23.102 [2024-12-09 15:01:24.828369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.828380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.840400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.840410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.852431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.852440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.864463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.864473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.876495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.876505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.102 [2024-12-09 15:01:24.888529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.102 [2024-12-09 15:01:24.888540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.897675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.361 [2024-12-09 15:01:24.900562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.900571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.912616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.912633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.924624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.924638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.936656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.936670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.937835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.361 [2024-12-09 15:01:24.948696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.948712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.960728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.960746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.972753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:23.361 [2024-12-09 15:01:24.972768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.984786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.984799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:24.996818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:24.996830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:25.008846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:25.008857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.361 [2024-12-09 15:01:25.020888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.361 [2024-12-09 15:01:25.020898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.032928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.032949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.044957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.044975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.056987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.057002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.069017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.069031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.081047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.081058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.093076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.093086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.105122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.105133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.117158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.117170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.129185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.129195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 15:01:25.141225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.141235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.362 [2024-12-09 
15:01:25.153260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.362 [2024-12-09 15:01:25.153275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.165282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.165292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.177316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.177326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.189349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.189359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.201383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.201394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.213420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.213438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 Running I/O for 5 seconds... 00:08:23.620 [2024-12-09 15:01:25.225444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.225454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.241362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.241389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.254945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.254964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.268985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.269005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.283156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.283180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.294129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.294150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.308140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.308160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.321977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.321997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.332976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
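The message pairs from subsystem.c:2130 and nvmf_rpc.c:1520 that repeat for the rest of this run are the target rejecting add-namespace requests for NSID 1 while that namespace is still attached. They appear right alongside "Running I/O for 5 seconds...", which suggests the test is deliberately hammering this RPC path while bdevperf keeps the connection busy, with each rejected request producing exactly one such pair. A hedged sketch of reproducing a single pair by hand follows; the Malloc0 bdev name is a placeholder and the rpc.py option spelling is recalled from the SPDK tree rather than taken from this trace.

```bash
#!/usr/bin/env bash
# Hedged sketch: issue the RPC that the target is rejecting above.
# Assumes a running nvmf target that already exposes NSID 1 on cnode1;
# "Malloc0" is a placeholder bdev name, not something from this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Requesting NSID 1 a second time should be refused, logging the familiar
# "Requested NSID 1 already in use" / "Unable to add namespace" pair on the
# target side, just as during the I/O run above.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 Malloc0 --nsid 1
```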
00:08:23.620 [2024-12-09 15:01:25.332995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.347673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.347692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.358534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.358552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.372941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.372960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.386681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.386700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.400019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.400038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.620 [2024-12-09 15:01:25.413622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.620 [2024-12-09 15:01:25.413640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.427429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.427448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.440946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.440964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.455162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.455181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.465444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.465462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.479812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.479831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.490398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.490417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.504425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.504445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.517698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.517718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.531286] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.531305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.544759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.544778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.558502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.558522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.572127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.572146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.585837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.585855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.599453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.599471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.613357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.613375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.627111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.627130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.640929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.878 [2024-12-09 15:01:25.640947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.878 [2024-12-09 15:01:25.654685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.879 [2024-12-09 15:01:25.654704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.879 [2024-12-09 15:01:25.668741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.879 [2024-12-09 15:01:25.668759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.682426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.682445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.695735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.695753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.709321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.709344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.723321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.723340] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.737004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.737022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.750699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.750718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.764441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.764459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.778840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.778858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.794514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.794532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.808244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.808264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.822471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.822489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.835860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.835878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.137 [2024-12-09 15:01:25.849612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.137 [2024-12-09 15:01:25.849634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.138 [2024-12-09 15:01:25.863424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.138 [2024-12-09 15:01:25.863442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.138 [2024-12-09 15:01:25.876399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.138 [2024-12-09 15:01:25.876418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.138 [2024-12-09 15:01:25.889903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.138 [2024-12-09 15:01:25.889923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.138 [2024-12-09 15:01:25.903648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.138 [2024-12-09 15:01:25.903674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.138 [2024-12-09 15:01:25.917494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.138 [2024-12-09 15:01:25.917514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.138 [2024-12-09 15:01:25.930973] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.138 [2024-12-09 15:01:25.930992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:25.944592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:25.944611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:25.958007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:25.958027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:25.971626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:25.971646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:25.985007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:25.985026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:25.998887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:25.998906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.012595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.012613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.025936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.025954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.039640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.039659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.053126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.053144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.067060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.067080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.080967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.080986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.094537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.094555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.108266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.108285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.122208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.122236] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.135729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.135750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.149466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.149484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.163063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.163086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.176441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.176459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.396 [2024-12-09 15:01:26.190092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.396 [2024-12-09 15:01:26.190110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.203973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.203991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.217533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.217551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 16982.00 IOPS, 132.67 MiB/s [2024-12-09T14:01:26.450Z] [2024-12-09 15:01:26.231111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.231129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.244833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.244851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.258299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.258318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.271872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.271891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.285755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.285773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.299595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.299614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.313341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.313361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 
15:01:26.326959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.326979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.341504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.341525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.352518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.352537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.366474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.655 [2024-12-09 15:01:26.366494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.655 [2024-12-09 15:01:26.380062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.656 [2024-12-09 15:01:26.380081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.656 [2024-12-09 15:01:26.393707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.656 [2024-12-09 15:01:26.393727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.656 [2024-12-09 15:01:26.407384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.656 [2024-12-09 15:01:26.407403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.656 [2024-12-09 15:01:26.421121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.656 [2024-12-09 15:01:26.421147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.656 [2024-12-09 15:01:26.435195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.656 [2024-12-09 15:01:26.435215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.656 [2024-12-09 15:01:26.448652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.656 [2024-12-09 15:01:26.448672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.914 [2024-12-09 15:01:26.462071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.914 [2024-12-09 15:01:26.462090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.476043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.476062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.489782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.489801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.503071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.503090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.516681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.516701] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.529988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.530008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.543747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.543765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.557414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.557434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.570892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.570912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.584529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.584551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.598224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.598244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.611874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.611894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.625885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.625905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.639670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.639689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.653280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.653299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.667107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.667126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.680995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.681014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.915 [2024-12-09 15:01:26.694913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.915 [2024-12-09 15:01:26.694932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.709162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.709181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.722888] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.722907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.736251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.736269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.749856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.749874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.763373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.763392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.776752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.776770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.790647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.790666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.804064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.804083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.817864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.817883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.831590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.831608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.844901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.844925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.858370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.858389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.871536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.871554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.884981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.885000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.898348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.898367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.911874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.911893] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.925432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.925451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.939275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.939294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.952686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.952709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.174 [2024-12-09 15:01:26.966486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.174 [2024-12-09 15:01:26.966505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:26.980616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:26.980636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:26.990775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:26.990795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.004723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.004747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.018165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.018184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.031504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.031523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.045233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.045252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.058960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.058978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.072788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.072807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.086791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.086812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.101346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.101367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.112406] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.112425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.126352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.126371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.139845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.139863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.153500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.153518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.166816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.166835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.180442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.180460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.193942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.193961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.207371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.207390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.433 [2024-12-09 15:01:27.220980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.433 [2024-12-09 15:01:27.220999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 17076.00 IOPS, 133.41 MiB/s [2024-12-09T14:01:27.487Z] [2024-12-09 15:01:27.234496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.234515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.247966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.247985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.261279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.261297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.275118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.275136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.288657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.288675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.302076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
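Once the 5-second run is underway, the console interleaves periodic throughput samples such as the 17076.00 IOPS, 133.41 MiB/s reading above with those expected error pairs, which makes the numbers awkward to pick out by eye. The snippet below is a small assumed convenience helper for a saved copy of this console output; console.log is a placeholder filename, not an artifact produced by this job.

```bash
#!/usr/bin/env bash
# Convenience sketch for post-processing a saved copy of this console output.
# "console.log" is a placeholder path; pass the real file as the first argument.
LOG=${1:-console.log}

echo "== Periodic bdevperf throughput samples =="
# Lines like "17076.00 IOPS, 133.41 MiB/s" carry the running rate.
grep -Eo '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' "$LOG"

echo "== Expected namespace-collision errors =="
# Each rejected nvmf_subsystem_add_ns attempt logs this pair of messages.
printf 'Requested NSID already in use: %s\n' "$(grep -c 'Requested NSID 1 already in use' "$LOG")"
printf 'Unable to add namespace:       %s\n' "$(grep -c 'Unable to add namespace' "$LOG")"
```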
00:08:25.692 [2024-12-09 15:01:27.302097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.315927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.315946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.329720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.329739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.343454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.343472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.357045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.357065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.370682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.370701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.384526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.692 [2024-12-09 15:01:27.384544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.692 [2024-12-09 15:01:27.398234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.398252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.693 [2024-12-09 15:01:27.411807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.411826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.693 [2024-12-09 15:01:27.425014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.425033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.693 [2024-12-09 15:01:27.439048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.439072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.693 [2024-12-09 15:01:27.452669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.452687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.693 [2024-12-09 15:01:27.466616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.466635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.693 [2024-12-09 15:01:27.480084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.693 [2024-12-09 15:01:27.480103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.493711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.493730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.507583] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.507601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.520884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.520903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.534770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.534789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.548360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.548379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.561988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.562006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.575298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.575316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.589345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.589365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.603015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.603035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.616778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.616798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.630259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.630278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.643796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.643816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.657660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.657678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.671006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.671025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.684685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.684703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.952 [2024-12-09 15:01:27.698043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.952 [2024-12-09 15:01:27.698068] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:25.952 [2024-12-09 15:01:27.711385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:25.952 [2024-12-09 15:01:27.711404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats every 10-15 ms from 15:01:27.711 through 15:01:30.235 (elapsed 00:08:25.952 to 00:08:28.544); the only other output in that window is the periodic throughput report:]
17091.00 IOPS, 133.52 MiB/s [2024-12-09T14:01:28.266Z]
17107.50 IOPS, 133.65 MiB/s [2024-12-09T14:01:29.303Z]
17116.40 IOPS, 133.72 MiB/s [2024-12-09T14:01:30.339Z]
00:08:28.544 Latency(us)
00:08:28.544 Device Information : runtime(s)      IOPS     MiB/s   Fail/s   TO/s   Average       min       max
00:08:28.544 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:28.544 Nvme1n1            :       5.01   17120.93   133.76     0.00   0.00   7469.15   3620.08  17101.78
00:08:28.544 ===================================================================================================================
00:08:28.544 Total              :              17120.93   133.76     0.00   0.00   7469.15   3620.08  17101.78
[a final burst of the same error pair follows, from 15:01:30.244 through 15:01:30.389, before the test tears the loop down]
00:08:28.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1306359) - No such process
00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1306359
00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.803 delay0 00:08:28.803 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.804 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:28.804 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.804 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.804 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.804 15:01:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:28.804 [2024-12-09 15:01:30.540788] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:35.478 Initializing NVMe Controllers 00:08:35.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:35.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:35.478 Initialization complete. Launching workers. 
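At this point zcopy.sh swaps the namespace out from under the initiator: it detaches NSID 1, wraps the malloc0 bdev in a delay bdev (delay0) that injects artificial latency, re-attaches delay0 as NSID 1, and then launches SPDK's abort example against the TCP listener, presumably so that queued commands stay outstanding long enough to be aborted. A condensed sketch of those steps (not the script itself), assuming a running nvmf target, scripts/rpc.py on its default RPC socket, and the same 10.0.0.2:4420 listener used in this run:

  # detach the old namespace, then re-attach it behind a latency-injecting delay bdev
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive it with the abort example over TCP (flags copied from the trace above)
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort statistics that follow are the payoff: with the injected delay, a large fraction of the queued commands are still outstanding when the abort requests arrive, which is what the completed/failed counts below reflect.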
00:08:35.478 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3263 00:08:35.478 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3526, failed to submit 57 00:08:35.478 success 3369, unsuccessful 157, failed 0 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.478 rmmod nvme_tcp 00:08:35.478 rmmod nvme_fabrics 00:08:35.478 rmmod nvme_keyring 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1304526 ']' 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1304526 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1304526 ']' 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1304526 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1304526 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1304526' 00:08:35.478 killing process with pid 1304526 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1304526 00:08:35.478 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1304526 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.759 15:01:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.759 15:01:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.664 00:08:37.664 real 0m31.855s 00:08:37.664 user 0m42.707s 00:08:37.664 sys 0m11.266s 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:37.664 ************************************ 00:08:37.664 END TEST nvmf_zcopy 00:08:37.664 ************************************ 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.664 15:01:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.924 ************************************ 00:08:37.924 START TEST nvmf_nmic 00:08:37.924 ************************************ 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:37.924 * Looking for test storage... 
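Condensed, the nvmftestfini / nvmf_tcp_fini teardown traced above first kills the nvmf target process (pid 1304526) and then undoes the host setup. A rough sketch of what the host-level xtraced commands amount to (root required; cvl_0_1 is an interface name specific to this CI host):

  sync
  modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_fabrics and nvme_keyring leaving with it
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the firewall rules the test tagged
  ip -4 addr flush cvl_0_1                               # clear the test address from the NIC

With the target cleaned up, the runner records 0m31.855s of wall-clock time for nvmf_zcopy and run_test starts nmic.sh, whose setup trace (test-storage lookup, lcov version check, PATH exports) follows.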
00:08:37.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.924 --rc genhtml_branch_coverage=1 00:08:37.924 --rc genhtml_function_coverage=1 00:08:37.924 --rc genhtml_legend=1 00:08:37.924 --rc geninfo_all_blocks=1 00:08:37.924 --rc geninfo_unexecuted_blocks=1 00:08:37.924 00:08:37.924 ' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.924 --rc genhtml_branch_coverage=1 00:08:37.924 --rc genhtml_function_coverage=1 00:08:37.924 --rc genhtml_legend=1 00:08:37.924 --rc geninfo_all_blocks=1 00:08:37.924 --rc geninfo_unexecuted_blocks=1 00:08:37.924 00:08:37.924 ' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.924 --rc genhtml_branch_coverage=1 00:08:37.924 --rc genhtml_function_coverage=1 00:08:37.924 --rc genhtml_legend=1 00:08:37.924 --rc geninfo_all_blocks=1 00:08:37.924 --rc geninfo_unexecuted_blocks=1 00:08:37.924 00:08:37.924 ' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.924 --rc genhtml_branch_coverage=1 00:08:37.924 --rc genhtml_function_coverage=1 00:08:37.924 --rc genhtml_legend=1 00:08:37.924 --rc geninfo_all_blocks=1 00:08:37.924 --rc geninfo_unexecuted_blocks=1 00:08:37.924 00:08:37.924 ' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
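The version check traced above is scripts/common.sh deciding which lcov option spelling to export: it takes the last field of the lcov --version output (1.15 on this host), splits it and the threshold 2 on '.', '-' and ':', and compares them field by field; because 1 < 2, the pre-2.0 --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spelling seen above is selected. A condensed illustration of that comparison (the function name and simplified loop are mine; the real cmp_versions handles several operators):

  version_lt() {                       # true when version $1 sorts before version $2
      local IFS=.-:
      local -a ver1=($1) ver2=($2)
      local i
      for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
          ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
          ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
      done
      return 1                         # equal is not "less than"
  }
  version_lt 1.15 2 && echo "old lcov: keep the legacy --rc lcov_*_coverage=1 options"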
00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.924 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:37.925 
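The "[: : integer expression expected" message logged above comes from an arithmetic test receiving an empty string; the test fails, the script keeps going, and the line is harmless noise. A small reproduction, with SOME_FLAG standing in for whatever unset variable nvmf/common.sh line 33 actually checks (the real variable name is not visible in this trace):

SOME_FLAG=""
if [ "$SOME_FLAG" -eq 1 ]; then    # prints: [: : integer expression expected
    echo "flag enabled"
fi

# Defensive variant: treat unset/empty as 0 so the test stays quiet.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi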
15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.925 15:01:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:44.495 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:44.495 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.495 15:01:45 
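The per-PCI loop the trace is entering resolves each matched NIC function to its kernel net devices through sysfs. A condensed sketch of that walk, using the two E810 functions found in this run (substitute your own BDFs; the operstate read is illustrative of the "up" check):

for pci in 0000:af:00.0 0000:af:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue        # skip functions with no netdev bound
        dev=${netdir##*/}                   # e.g. cvl_0_0 / cvl_0_1
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev (operstate: $state)"
    done
done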
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:44.495 Found net devices under 0000:af:00.0: cvl_0_0 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.495 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:44.496 Found net devices under 0000:af:00.1: cvl_0_1 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:08:44.496 00:08:44.496 --- 10.0.0.2 ping statistics --- 00:08:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.496 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:44.496 00:08:44.496 --- 10.0.0.1 ping statistics --- 00:08:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.496 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1311899 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1311899 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1311899 ']' 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 [2024-12-09 15:01:45.696266] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
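The namespace plumbing that the two pings just validated boils down to the following sequence. Interface names, the namespace name, addresses, and the iptables rule are taken verbatim from this run; it needs root and a machine with the same NIC layout:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first E810 port -> target ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator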
00:08:44.496 [2024-12-09 15:01:45.696316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.496 [2024-12-09 15:01:45.777271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.496 [2024-12-09 15:01:45.819603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.496 [2024-12-09 15:01:45.819640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.496 [2024-12-09 15:01:45.819650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.496 [2024-12-09 15:01:45.819658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.496 [2024-12-09 15:01:45.819664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.496 [2024-12-09 15:01:45.821207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.496 [2024-12-09 15:01:45.821317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.496 [2024-12-09 15:01:45.821349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.496 [2024-12-09 15:01:45.821350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 [2024-12-09 15:01:45.959195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 Malloc0 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.496 15:01:45 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 [2024-12-09 15:01:46.021936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:44.496 test case1: single bdev can't be used in multiple subsystems 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:44.496 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.497 [2024-12-09 15:01:46.053877] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:44.497 [2024-12-09 15:01:46.053899] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:44.497 [2024-12-09 15:01:46.053908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.497 request: 00:08:44.497 { 00:08:44.497 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:44.497 "namespace": { 00:08:44.497 "bdev_name": "Malloc0", 00:08:44.497 "no_auto_visible": false, 
00:08:44.497 "hide_metadata": false 00:08:44.497 }, 00:08:44.497 "method": "nvmf_subsystem_add_ns", 00:08:44.497 "req_id": 1 00:08:44.497 } 00:08:44.497 Got JSON-RPC error response 00:08:44.497 response: 00:08:44.497 { 00:08:44.497 "code": -32602, 00:08:44.497 "message": "Invalid parameters" 00:08:44.497 } 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:44.497 Adding namespace failed - expected result. 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:44.497 test case2: host connect to nvmf target in multiple paths 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.497 [2024-12-09 15:01:46.066019] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.497 15:01:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.431 15:01:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:46.806 15:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.806 15:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:46.806 15:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.806 15:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:46.806 15:01:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:48.708 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:48.708 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:48.708 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:48.708 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:48.708 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:48.708 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:48.708 15:01:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:48.708 [global] 00:08:48.708 thread=1 00:08:48.708 invalidate=1 00:08:48.708 rw=write 00:08:48.708 time_based=1 00:08:48.708 runtime=1 00:08:48.708 ioengine=libaio 00:08:48.708 direct=1 00:08:48.708 bs=4096 00:08:48.708 iodepth=1 00:08:48.708 norandommap=0 00:08:48.708 numjobs=1 00:08:48.708 00:08:48.708 verify_dump=1 00:08:48.708 verify_backlog=512 00:08:48.708 verify_state_save=0 00:08:48.708 do_verify=1 00:08:48.708 verify=crc32c-intel 00:08:48.708 [job0] 00:08:48.708 filename=/dev/nvme0n1 00:08:48.708 Could not set queue depth (nvme0n1) 00:08:48.967 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.967 fio-3.35 00:08:48.967 Starting 1 thread 00:08:50.341 00:08:50.341 job0: (groupid=0, jobs=1): err= 0: pid=1312961: Mon Dec 9 15:01:51 2024 00:08:50.341 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:08:50.341 slat (nsec): min=8871, max=23787, avg=22437.23, stdev=3065.35 00:08:50.341 clat (usec): min=40866, max=42035, avg=41009.83, stdev=233.80 00:08:50.341 lat (usec): min=40889, max=42056, avg=41032.27, stdev=233.69 00:08:50.341 clat percentiles (usec): 00:08:50.341 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:50.341 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:50.341 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:50.341 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:50.341 | 99.99th=[42206] 00:08:50.341 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:08:50.341 slat (usec): min=9, max=29120, avg=68.06, stdev=1286.47 00:08:50.341 clat (usec): min=114, max=378, avg=132.53, stdev=24.13 00:08:50.341 lat (usec): min=125, max=29454, avg=200.59, stdev=1295.63 00:08:50.341 clat percentiles (usec): 00:08:50.341 | 1.00th=[ 118], 5.00th=[ 119], 10.00th=[ 120], 20.00th=[ 122], 00:08:50.341 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 128], 00:08:50.341 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 161], 95.00th=[ 172], 00:08:50.341 | 99.00th=[ 206], 99.50th=[ 334], 99.90th=[ 379], 99.95th=[ 379], 00:08:50.341 | 99.99th=[ 379] 00:08:50.341 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:50.341 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:50.341 lat (usec) : 250=95.13%, 500=0.75% 00:08:50.341 lat (msec) : 50=4.12% 00:08:50.341 cpu : usr=0.50%, sys=0.30%, ctx=538, majf=0, minf=1 00:08:50.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.341 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.341 00:08:50.341 Run status group 0 (all jobs): 00:08:50.341 READ: bw=87.4KiB/s (89.5kB/s), 87.4KiB/s-87.4KiB/s (89.5kB/s-89.5kB/s), io=88.0KiB (90.1kB), run=1007-1007msec 00:08:50.341 WRITE: bw=2034KiB/s (2083kB/s), 2034KiB/s-2034KiB/s (2083kB/s-2083kB/s), io=2048KiB (2097kB), run=1007-1007msec 00:08:50.341 00:08:50.341 Disk stats (read/write): 00:08:50.341 nvme0n1: ios=71/512, merge=0/0, ticks=1661/63, in_queue=1724, util=98.70% 00:08:50.341 15:01:51 
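The fio-wrapper invocation above amounts to a short libaio write-and-verify job against the namespace device that appeared after nvme connect. The job parameters are copied from the trace; the file name /tmp/nmic-write.fio is a placeholder, and /dev/nvme0n1 is whatever device this run happened to enumerate:

cat > /tmp/nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-write.fio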
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.341 rmmod nvme_tcp 00:08:50.341 rmmod nvme_fabrics 00:08:50.341 rmmod nvme_keyring 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1311899 ']' 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1311899 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1311899 ']' 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1311899 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.341 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1311899 00:08:50.600 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1311899' 00:08:50.601 killing process with pid 1311899 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1311899 00:08:50.601 15:01:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1311899 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.601 15:01:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.137 00:08:53.137 real 0m14.965s 00:08:53.137 user 0m33.429s 00:08:53.137 sys 0m5.216s 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.137 ************************************ 00:08:53.137 END TEST nvmf_nmic 00:08:53.137 ************************************ 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.137 ************************************ 00:08:53.137 START TEST nvmf_fio_target 00:08:53.137 ************************************ 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:53.137 * Looking for test storage... 
00:08:53.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.137 --rc genhtml_branch_coverage=1 00:08:53.137 --rc genhtml_function_coverage=1 00:08:53.137 --rc genhtml_legend=1 00:08:53.137 --rc geninfo_all_blocks=1 00:08:53.137 --rc geninfo_unexecuted_blocks=1 00:08:53.137 00:08:53.137 ' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.137 --rc genhtml_branch_coverage=1 00:08:53.137 --rc genhtml_function_coverage=1 00:08:53.137 --rc genhtml_legend=1 00:08:53.137 --rc geninfo_all_blocks=1 00:08:53.137 --rc geninfo_unexecuted_blocks=1 00:08:53.137 00:08:53.137 ' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.137 --rc genhtml_branch_coverage=1 00:08:53.137 --rc genhtml_function_coverage=1 00:08:53.137 --rc genhtml_legend=1 00:08:53.137 --rc geninfo_all_blocks=1 00:08:53.137 --rc geninfo_unexecuted_blocks=1 00:08:53.137 00:08:53.137 ' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.137 --rc genhtml_branch_coverage=1 00:08:53.137 --rc genhtml_function_coverage=1 00:08:53.137 --rc genhtml_legend=1 00:08:53.137 --rc geninfo_all_blocks=1 00:08:53.137 --rc geninfo_unexecuted_blocks=1 00:08:53.137 00:08:53.137 ' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.137 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.138 15:01:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.138 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.705 15:02:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:59.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:59.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.705 15:02:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.705 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:59.706 Found net devices under 0000:af:00.0: cvl_0_0 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:59.706 Found net devices under 0000:af:00.1: cvl_0_1 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.706 15:02:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:08:59.706 00:08:59.706 --- 10.0.0.2 ping statistics --- 00:08:59.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.706 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:08:59.706 00:08:59.706 --- 10.0.0.1 ping statistics --- 00:08:59.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.706 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1316692 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1316692 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1316692 ']' 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.706 [2024-12-09 15:02:00.701065] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:08:59.706 [2024-12-09 15:02:00.701117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.706 [2024-12-09 15:02:00.778750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.706 [2024-12-09 15:02:00.818550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.706 [2024-12-09 15:02:00.818587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.706 [2024-12-09 15:02:00.818594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.706 [2024-12-09 15:02:00.818600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.706 [2024-12-09 15:02:00.818604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.706 [2024-12-09 15:02:00.820090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.706 [2024-12-09 15:02:00.820195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.706 [2024-12-09 15:02:00.820282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.706 [2024-12-09 15:02:00.820283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.706 15:02:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.706 [2024-12-09 15:02:01.130426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.706 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.707 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:59.707 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.965 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:59.965 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.224 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:00.224 15:02:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.482 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:00.482 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:00.482 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.741 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:00.741 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.999 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:00.999 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.258 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:01.258 15:02:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:01.517 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.517 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:01.517 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.776 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:01.776 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.034 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.034 [2024-12-09 15:02:03.823547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.294 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:02.294 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:02.552 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.935 15:02:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:03.935 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:03.935 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.935 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:03.935 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:03.935 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:05.839 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:05.839 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:05.839 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.839 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:05.839 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.839 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:05.840 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:05.840 [global] 00:09:05.840 thread=1 00:09:05.840 invalidate=1 00:09:05.840 rw=write 00:09:05.840 time_based=1 00:09:05.840 runtime=1 00:09:05.840 ioengine=libaio 00:09:05.840 direct=1 00:09:05.840 bs=4096 00:09:05.840 iodepth=1 00:09:05.840 norandommap=0 00:09:05.840 numjobs=1 00:09:05.840 00:09:05.840 verify_dump=1 00:09:05.840 verify_backlog=512 00:09:05.840 verify_state_save=0 00:09:05.840 do_verify=1 00:09:05.840 verify=crc32c-intel 00:09:05.840 [job0] 00:09:05.840 filename=/dev/nvme0n1 00:09:05.840 [job1] 00:09:05.840 filename=/dev/nvme0n2 00:09:05.840 [job2] 00:09:05.840 filename=/dev/nvme0n3 00:09:05.840 [job3] 00:09:05.840 filename=/dev/nvme0n4 00:09:05.840 Could not set queue depth (nvme0n1) 00:09:05.840 Could not set queue depth (nvme0n2) 00:09:05.840 Could not set queue depth (nvme0n3) 00:09:05.840 Could not set queue depth (nvme0n4) 00:09:06.098 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.098 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.098 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.098 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.098 fio-3.35 00:09:06.098 Starting 4 threads 00:09:07.475 00:09:07.475 job0: (groupid=0, jobs=1): err= 0: pid=1318031: Mon Dec 9 15:02:09 2024 00:09:07.475 read: IOPS=2504, BW=9.78MiB/s (10.3MB/s)(9.79MiB/1001msec) 00:09:07.475 slat (nsec): min=6706, max=32679, avg=7586.60, stdev=1082.43 00:09:07.475 clat (usec): min=178, max=333, avg=218.58, stdev=19.77 00:09:07.475 lat (usec): min=186, max=341, avg=226.17, stdev=19.80 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 
00:09:07.475 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:09:07.475 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 255], 00:09:07.475 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 314], 00:09:07.475 | 99.99th=[ 334] 00:09:07.475 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:07.475 slat (nsec): min=9782, max=40463, avg=11069.05, stdev=1199.19 00:09:07.475 clat (usec): min=112, max=348, avg=153.11, stdev=21.31 00:09:07.475 lat (usec): min=123, max=389, avg=164.18, stdev=21.45 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:09:07.475 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:09:07.475 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 198], 00:09:07.475 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 255], 99.95th=[ 273], 00:09:07.475 | 99.99th=[ 351] 00:09:07.475 bw ( KiB/s): min=12288, max=12288, per=50.99%, avg=12288.00, stdev= 0.00, samples=1 00:09:07.475 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:07.475 lat (usec) : 250=95.60%, 500=4.40% 00:09:07.475 cpu : usr=2.80%, sys=4.80%, ctx=5070, majf=0, minf=1 00:09:07.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 issued rwts: total=2507,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.475 job1: (groupid=0, jobs=1): err= 0: pid=1318032: Mon Dec 9 15:02:09 2024 00:09:07.475 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:09:07.475 slat (nsec): min=9779, max=24564, avg=22321.43, stdev=3351.90 00:09:07.475 clat (usec): min=40746, max=42003, avg=41054.95, stdev=304.69 00:09:07.475 lat (usec): min=40756, max=42026, avg=41077.27, stdev=305.52 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:07.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:07.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:07.475 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:07.475 | 99.99th=[42206] 00:09:07.475 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:07.475 slat (nsec): min=6305, max=28031, avg=10612.15, stdev=2208.49 00:09:07.475 clat (usec): min=136, max=340, avg=163.64, stdev=14.61 00:09:07.475 lat (usec): min=144, max=366, avg=174.25, stdev=15.14 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 153], 00:09:07.475 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:07.475 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:09:07.475 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 343], 99.95th=[ 343], 00:09:07.475 | 99.99th=[ 343] 00:09:07.475 bw ( KiB/s): min= 4096, max= 4096, per=17.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:07.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:07.475 lat (usec) : 250=95.51%, 500=0.19% 00:09:07.475 lat (msec) : 50=4.30% 00:09:07.475 cpu : usr=0.19%, sys=0.48%, ctx=537, majf=0, minf=1 00:09:07.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.475 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.475 job2: (groupid=0, jobs=1): err= 0: pid=1318033: Mon Dec 9 15:02:09 2024 00:09:07.475 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:07.475 slat (nsec): min=6919, max=21059, avg=7652.00, stdev=698.96 00:09:07.475 clat (usec): min=167, max=267, avg=205.53, stdev=13.45 00:09:07.475 lat (usec): min=174, max=275, avg=213.18, stdev=13.48 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:09:07.475 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:09:07.475 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:09:07.475 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 262], 99.95th=[ 269], 00:09:07.475 | 99.99th=[ 269] 00:09:07.475 write: IOPS=2655, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:09:07.475 slat (nsec): min=9724, max=36886, avg=11092.39, stdev=1178.09 00:09:07.475 clat (usec): min=118, max=324, avg=154.90, stdev=13.56 00:09:07.475 lat (usec): min=128, max=361, avg=166.00, stdev=13.79 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:07.475 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:09:07.475 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:09:07.475 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 208], 00:09:07.475 | 99.99th=[ 326] 00:09:07.475 bw ( KiB/s): min=12288, max=12288, per=50.99%, avg=12288.00, stdev= 0.00, samples=1 00:09:07.475 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:07.475 lat (usec) : 250=99.44%, 500=0.56% 00:09:07.475 cpu : usr=2.10%, sys=5.70%, ctx=5219, majf=0, minf=1 00:09:07.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 issued rwts: total=2560,2658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.475 job3: (groupid=0, jobs=1): err= 0: pid=1318035: Mon Dec 9 15:02:09 2024 00:09:07.475 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:09:07.475 slat (nsec): min=10115, max=23612, avg=14519.00, stdev=3077.83 00:09:07.475 clat (usec): min=268, max=41140, avg=39201.30, stdev=8487.63 00:09:07.475 lat (usec): min=280, max=41156, avg=39215.82, stdev=8488.18 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:07.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:07.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:07.475 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:07.475 | 99.99th=[41157] 00:09:07.475 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:07.475 slat (nsec): min=11727, max=39616, avg=14151.39, stdev=2383.67 00:09:07.475 clat (usec): min=135, max=304, avg=182.63, stdev=17.71 00:09:07.475 lat (usec): min=148, max=344, avg=196.78, stdev=18.06 00:09:07.475 clat percentiles (usec): 00:09:07.475 | 1.00th=[ 145], 5.00th=[ 157], 
10.00th=[ 163], 20.00th=[ 169], 00:09:07.475 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:07.475 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:09:07.475 | 99.00th=[ 235], 99.50th=[ 239], 99.90th=[ 306], 99.95th=[ 306], 00:09:07.475 | 99.99th=[ 306] 00:09:07.475 bw ( KiB/s): min= 4096, max= 4096, per=17.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:07.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:07.475 lat (usec) : 250=95.33%, 500=0.56% 00:09:07.475 lat (msec) : 50=4.11% 00:09:07.475 cpu : usr=0.20%, sys=0.80%, ctx=535, majf=0, minf=3 00:09:07.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.475 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.475 00:09:07.475 Run status group 0 (all jobs): 00:09:07.475 READ: bw=19.3MiB/s (20.2MB/s), 88.8KiB/s-9.99MiB/s (90.9kB/s-10.5MB/s), io=20.0MiB (20.9MB), run=1001-1036msec 00:09:07.475 WRITE: bw=23.5MiB/s (24.7MB/s), 1977KiB/s-10.4MiB/s (2024kB/s-10.9MB/s), io=24.4MiB (25.6MB), run=1001-1036msec 00:09:07.475 00:09:07.475 Disk stats (read/write): 00:09:07.475 nvme0n1: ios=2072/2222, merge=0/0, ticks=1298/329, in_queue=1627, util=85.07% 00:09:07.475 nvme0n2: ios=41/512, merge=0/0, ticks=1643/80, in_queue=1723, util=89.00% 00:09:07.475 nvme0n3: ios=2105/2385, merge=0/0, ticks=652/371, in_queue=1023, util=93.07% 00:09:07.475 nvme0n4: ios=76/512, merge=0/0, ticks=804/91, in_queue=895, util=95.13% 00:09:07.475 15:02:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:07.475 [global] 00:09:07.475 thread=1 00:09:07.475 invalidate=1 00:09:07.475 rw=randwrite 00:09:07.475 time_based=1 00:09:07.475 runtime=1 00:09:07.475 ioengine=libaio 00:09:07.475 direct=1 00:09:07.475 bs=4096 00:09:07.475 iodepth=1 00:09:07.475 norandommap=0 00:09:07.475 numjobs=1 00:09:07.475 00:09:07.475 verify_dump=1 00:09:07.475 verify_backlog=512 00:09:07.475 verify_state_save=0 00:09:07.475 do_verify=1 00:09:07.475 verify=crc32c-intel 00:09:07.475 [job0] 00:09:07.475 filename=/dev/nvme0n1 00:09:07.475 [job1] 00:09:07.475 filename=/dev/nvme0n2 00:09:07.475 [job2] 00:09:07.475 filename=/dev/nvme0n3 00:09:07.475 [job3] 00:09:07.475 filename=/dev/nvme0n4 00:09:07.475 Could not set queue depth (nvme0n1) 00:09:07.475 Could not set queue depth (nvme0n2) 00:09:07.475 Could not set queue depth (nvme0n3) 00:09:07.475 Could not set queue depth (nvme0n4) 00:09:07.734 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.734 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.734 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.734 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.734 fio-3.35 00:09:07.734 Starting 4 threads 00:09:09.111 00:09:09.111 job0: (groupid=0, jobs=1): err= 0: pid=1318404: Mon Dec 9 15:02:10 2024 00:09:09.111 read: IOPS=127, BW=510KiB/s (522kB/s)(524KiB/1028msec) 00:09:09.111 slat (nsec): min=6931, max=41854, 
avg=10957.32, stdev=6420.81 00:09:09.111 clat (usec): min=175, max=41120, avg=7052.17, stdev=15285.16 00:09:09.111 lat (usec): min=184, max=41141, avg=7063.13, stdev=15290.47 00:09:09.111 clat percentiles (usec): 00:09:09.111 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:09:09.111 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 210], 00:09:09.111 | 70.00th=[ 217], 80.00th=[ 239], 90.00th=[41157], 95.00th=[41157], 00:09:09.111 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.111 | 99.99th=[41157] 00:09:09.111 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:09:09.111 slat (nsec): min=9594, max=81305, avg=11339.69, stdev=3830.84 00:09:09.111 clat (usec): min=151, max=373, avg=184.72, stdev=23.25 00:09:09.111 lat (usec): min=163, max=383, avg=196.06, stdev=23.93 00:09:09.111 clat percentiles (usec): 00:09:09.111 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:09:09.111 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:09:09.111 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 223], 95.00th=[ 237], 00:09:09.111 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 375], 99.95th=[ 375], 00:09:09.111 | 99.99th=[ 375] 00:09:09.111 bw ( KiB/s): min= 4087, max= 4087, per=34.19%, avg=4087.00, stdev= 0.00, samples=1 00:09:09.111 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:09.111 lat (usec) : 250=94.56%, 500=1.87%, 750=0.16% 00:09:09.111 lat (msec) : 50=3.42% 00:09:09.111 cpu : usr=0.78%, sys=0.68%, ctx=644, majf=0, minf=1 00:09:09.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.111 issued rwts: total=131,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.111 job1: (groupid=0, jobs=1): err= 0: pid=1318405: Mon Dec 9 15:02:10 2024 00:09:09.111 read: IOPS=1029, BW=4119KiB/s (4218kB/s)(4148KiB/1007msec) 00:09:09.111 slat (nsec): min=7332, max=40768, avg=8549.55, stdev=2378.61 00:09:09.111 clat (usec): min=167, max=41175, avg=710.79, stdev=4540.11 00:09:09.111 lat (usec): min=176, max=41186, avg=719.34, stdev=4541.71 00:09:09.111 clat percentiles (usec): 00:09:09.111 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:09.111 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:09:09.111 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 215], 95.00th=[ 221], 00:09:09.111 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.111 | 99.99th=[41157] 00:09:09.111 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:09:09.111 slat (nsec): min=10263, max=36616, avg=11809.93, stdev=2075.97 00:09:09.111 clat (usec): min=111, max=1976, avg=152.66, stdev=57.99 00:09:09.111 lat (usec): min=122, max=1986, avg=164.47, stdev=58.44 00:09:09.111 clat percentiles (usec): 00:09:09.111 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 127], 00:09:09.111 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 151], 00:09:09.111 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 215], 00:09:09.111 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 889], 99.95th=[ 1975], 00:09:09.111 | 99.99th=[ 1975] 00:09:09.111 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:09.111 iops : min= 3072, 
max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:09.111 lat (usec) : 250=98.68%, 500=0.74%, 1000=0.04% 00:09:09.111 lat (msec) : 2=0.04%, 50=0.51% 00:09:09.111 cpu : usr=2.29%, sys=3.98%, ctx=2575, majf=0, minf=1 00:09:09.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.111 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.111 job2: (groupid=0, jobs=1): err= 0: pid=1318415: Mon Dec 9 15:02:10 2024 00:09:09.111 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:09:09.111 slat (nsec): min=9394, max=31236, avg=22334.73, stdev=3445.30 00:09:09.111 clat (usec): min=40845, max=41060, avg=40970.13, stdev=54.23 00:09:09.111 lat (usec): min=40868, max=41071, avg=40992.47, stdev=53.72 00:09:09.111 clat percentiles (usec): 00:09:09.111 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:09.111 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:09.111 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:09.111 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.111 | 99.99th=[41157] 00:09:09.111 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:09.111 slat (nsec): min=9127, max=77766, avg=11288.15, stdev=3827.01 00:09:09.111 clat (usec): min=125, max=351, avg=198.17, stdev=22.49 00:09:09.111 lat (usec): min=136, max=428, avg=209.46, stdev=24.32 00:09:09.111 clat percentiles (usec): 00:09:09.111 | 1.00th=[ 133], 5.00th=[ 161], 10.00th=[ 178], 20.00th=[ 186], 00:09:09.111 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 202], 00:09:09.111 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 233], 00:09:09.111 | 99.00th=[ 249], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 351], 00:09:09.111 | 99.99th=[ 351] 00:09:09.111 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.112 lat (usec) : 250=94.94%, 500=0.94% 00:09:09.112 lat (msec) : 50=4.12% 00:09:09.112 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=1 00:09:09.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.112 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.112 job3: (groupid=0, jobs=1): err= 0: pid=1318416: Mon Dec 9 15:02:10 2024 00:09:09.112 read: IOPS=34, BW=139KiB/s (142kB/s)(140KiB/1010msec) 00:09:09.112 slat (nsec): min=8180, max=24711, avg=17494.71, stdev=6875.75 00:09:09.112 clat (usec): min=194, max=41500, avg=25851.56, stdev=19989.50 00:09:09.112 lat (usec): min=203, max=41508, avg=25869.05, stdev=19992.61 00:09:09.112 clat percentiles (usec): 00:09:09.112 | 1.00th=[ 196], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 219], 00:09:09.112 | 30.00th=[ 245], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:09:09.112 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:09.112 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:09.112 
| 99.99th=[41681] 00:09:09.112 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:09.112 slat (nsec): min=10475, max=78413, avg=12136.23, stdev=3316.55 00:09:09.112 clat (usec): min=145, max=276, avg=188.44, stdev=15.31 00:09:09.112 lat (usec): min=158, max=354, avg=200.58, stdev=16.32 00:09:09.112 clat percentiles (usec): 00:09:09.112 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:09:09.112 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:09:09.112 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 206], 95.00th=[ 212], 00:09:09.112 | 99.00th=[ 221], 99.50th=[ 223], 99.90th=[ 277], 99.95th=[ 277], 00:09:09.112 | 99.99th=[ 277] 00:09:09.112 bw ( KiB/s): min= 4096, max= 4096, per=34.27%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.112 lat (usec) : 250=95.43%, 500=0.55% 00:09:09.112 lat (msec) : 50=4.02% 00:09:09.112 cpu : usr=0.40%, sys=0.50%, ctx=548, majf=0, minf=1 00:09:09.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.112 issued rwts: total=35,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.112 00:09:09.112 Run status group 0 (all jobs): 00:09:09.112 READ: bw=4767KiB/s (4881kB/s), 87.0KiB/s-4119KiB/s (89.1kB/s-4218kB/s), io=4900KiB (5018kB), run=1007-1028msec 00:09:09.112 WRITE: bw=11.7MiB/s (12.2MB/s), 1992KiB/s-6101KiB/s (2040kB/s-6248kB/s), io=12.0MiB (12.6MB), run=1007-1028msec 00:09:09.112 00:09:09.112 Disk stats (read/write): 00:09:09.112 nvme0n1: ios=166/512, merge=0/0, ticks=917/93, in_queue=1010, util=98.50% 00:09:09.112 nvme0n2: ios=1082/1536, merge=0/0, ticks=1028/217, in_queue=1245, util=90.24% 00:09:09.112 nvme0n3: ios=73/512, merge=0/0, ticks=726/99, in_queue=825, util=89.74% 00:09:09.112 nvme0n4: ios=83/512, merge=0/0, ticks=793/92, in_queue=885, util=93.97% 00:09:09.112 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:09.112 [global] 00:09:09.112 thread=1 00:09:09.112 invalidate=1 00:09:09.112 rw=write 00:09:09.112 time_based=1 00:09:09.112 runtime=1 00:09:09.112 ioengine=libaio 00:09:09.112 direct=1 00:09:09.112 bs=4096 00:09:09.112 iodepth=128 00:09:09.112 norandommap=0 00:09:09.112 numjobs=1 00:09:09.112 00:09:09.112 verify_dump=1 00:09:09.112 verify_backlog=512 00:09:09.112 verify_state_save=0 00:09:09.112 do_verify=1 00:09:09.112 verify=crc32c-intel 00:09:09.112 [job0] 00:09:09.112 filename=/dev/nvme0n1 00:09:09.112 [job1] 00:09:09.112 filename=/dev/nvme0n2 00:09:09.112 [job2] 00:09:09.112 filename=/dev/nvme0n3 00:09:09.112 [job3] 00:09:09.112 filename=/dev/nvme0n4 00:09:09.112 Could not set queue depth (nvme0n1) 00:09:09.112 Could not set queue depth (nvme0n2) 00:09:09.112 Could not set queue depth (nvme0n3) 00:09:09.112 Could not set queue depth (nvme0n4) 00:09:09.371 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.371 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.371 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:09:09.371 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.371 fio-3.35 00:09:09.371 Starting 4 threads 00:09:10.779 00:09:10.779 job0: (groupid=0, jobs=1): err= 0: pid=1318862: Mon Dec 9 15:02:12 2024 00:09:10.779 read: IOPS=6576, BW=25.7MiB/s (26.9MB/s)(26.0MiB/1013msec) 00:09:10.779 slat (nsec): min=1397, max=10601k, avg=76828.18, stdev=585079.46 00:09:10.779 clat (usec): min=3420, max=30903, avg=9920.91, stdev=3402.55 00:09:10.779 lat (usec): min=3430, max=30929, avg=9997.74, stdev=3449.26 00:09:10.779 clat percentiles (usec): 00:09:10.779 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7570], 00:09:10.779 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 9372], 00:09:10.779 | 70.00th=[10945], 80.00th=[12911], 90.00th=[14222], 95.00th=[16188], 00:09:10.779 | 99.00th=[22152], 99.50th=[23200], 99.90th=[24773], 99.95th=[24773], 00:09:10.779 | 99.99th=[30802] 00:09:10.779 write: IOPS=7076, BW=27.6MiB/s (29.0MB/s)(28.0MiB/1013msec); 0 zone resets 00:09:10.779 slat (usec): min=2, max=7221, avg=61.74, stdev=355.11 00:09:10.779 clat (usec): min=1516, max=28020, avg=8703.15, stdev=3760.36 00:09:10.779 lat (usec): min=1530, max=28024, avg=8764.89, stdev=3793.25 00:09:10.779 clat percentiles (usec): 00:09:10.779 | 1.00th=[ 3490], 5.00th=[ 4817], 10.00th=[ 5932], 20.00th=[ 6849], 00:09:10.779 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 8029], 00:09:10.779 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[15926], 95.00th=[17171], 00:09:10.779 | 99.00th=[22152], 99.50th=[24249], 99.90th=[27132], 99.95th=[27919], 00:09:10.779 | 99.99th=[27919] 00:09:10.779 bw ( KiB/s): min=27792, max=28584, per=43.70%, avg=28188.00, stdev=560.03, samples=2 00:09:10.779 iops : min= 6948, max= 7146, avg=7047.00, stdev=140.01, samples=2 00:09:10.779 lat (msec) : 2=0.01%, 4=1.47%, 10=72.74%, 20=23.44%, 50=2.34% 00:09:10.779 cpu : usr=6.62%, sys=8.50%, ctx=639, majf=0, minf=1 00:09:10.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:10.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.780 issued rwts: total=6662,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.780 job1: (groupid=0, jobs=1): err= 0: pid=1318882: Mon Dec 9 15:02:12 2024 00:09:10.780 read: IOPS=2548, BW=9.96MiB/s (10.4MB/s)(10.1MiB/1013msec) 00:09:10.780 slat (nsec): min=1190, max=21897k, avg=146165.12, stdev=1078133.41 00:09:10.780 clat (usec): min=4651, max=60267, avg=18909.70, stdev=9148.12 00:09:10.780 lat (usec): min=4658, max=60281, avg=19055.87, stdev=9226.46 00:09:10.780 clat percentiles (usec): 00:09:10.780 | 1.00th=[ 6194], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11469], 00:09:10.780 | 30.00th=[13698], 40.00th=[15139], 50.00th=[17433], 60.00th=[19268], 00:09:10.780 | 70.00th=[21103], 80.00th=[23725], 90.00th=[27919], 95.00th=[40109], 00:09:10.780 | 99.00th=[52167], 99.50th=[55313], 99.90th=[60031], 99.95th=[60031], 00:09:10.780 | 99.99th=[60031] 00:09:10.780 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec); 0 zone resets 00:09:10.780 slat (usec): min=2, max=12209, avg=177.69, stdev=888.90 00:09:10.780 clat (msec): min=2, max=115, avg=25.94, stdev=22.25 00:09:10.780 lat (msec): min=2, max=115, avg=26.12, stdev=22.40 00:09:10.780 clat percentiles (msec): 00:09:10.780 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 
9], 20.00th=[ 9], 00:09:10.780 | 30.00th=[ 12], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 19], 00:09:10.780 | 70.00th=[ 39], 80.00th=[ 44], 90.00th=[ 51], 95.00th=[ 69], 00:09:10.780 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 116], 99.95th=[ 116], 00:09:10.780 | 99.99th=[ 116] 00:09:10.780 bw ( KiB/s): min=11448, max=12288, per=18.40%, avg=11868.00, stdev=593.97, samples=2 00:09:10.780 iops : min= 2862, max= 3072, avg=2967.00, stdev=148.49, samples=2 00:09:10.780 lat (msec) : 4=0.39%, 10=16.57%, 20=46.73%, 50=30.17%, 100=4.90% 00:09:10.780 lat (msec) : 250=1.24% 00:09:10.780 cpu : usr=2.37%, sys=3.95%, ctx=304, majf=0, minf=1 00:09:10.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:10.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.780 issued rwts: total=2582,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.780 job2: (groupid=0, jobs=1): err= 0: pid=1318900: Mon Dec 9 15:02:12 2024 00:09:10.780 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:09:10.780 slat (nsec): min=1916, max=29978k, avg=162431.98, stdev=1368458.23 00:09:10.780 clat (usec): min=8494, max=66753, avg=23228.74, stdev=12423.63 00:09:10.780 lat (usec): min=8503, max=66779, avg=23391.17, stdev=12540.57 00:09:10.780 clat percentiles (usec): 00:09:10.780 | 1.00th=[11207], 5.00th=[11600], 10.00th=[11994], 20.00th=[13960], 00:09:10.780 | 30.00th=[14877], 40.00th=[15270], 50.00th=[16057], 60.00th=[19006], 00:09:10.780 | 70.00th=[28181], 80.00th=[35390], 90.00th=[43779], 95.00th=[50594], 00:09:10.780 | 99.00th=[52691], 99.50th=[52691], 99.90th=[62129], 99.95th=[65799], 00:09:10.780 | 99.99th=[66847] 00:09:10.780 write: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1008msec); 0 zone resets 00:09:10.780 slat (usec): min=2, max=19955, avg=188.07, stdev=1283.81 00:09:10.780 clat (usec): min=1157, max=110802, avg=21791.29, stdev=16088.10 00:09:10.780 lat (msec): min=8, max=113, avg=21.98, stdev=16.21 00:09:10.780 clat percentiles (msec): 00:09:10.780 | 1.00th=[ 13], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 13], 00:09:10.780 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 19], 00:09:10.780 | 70.00th=[ 22], 80.00th=[ 30], 90.00th=[ 34], 95.00th=[ 51], 00:09:10.780 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 111], 00:09:10.780 | 99.99th=[ 111] 00:09:10.780 bw ( KiB/s): min=10864, max=12288, per=17.95%, avg=11576.00, stdev=1006.92, samples=2 00:09:10.780 iops : min= 2716, max= 3072, avg=2894.00, stdev=251.73, samples=2 00:09:10.780 lat (msec) : 2=0.02%, 10=0.47%, 20=64.99%, 50=28.91%, 100=5.05% 00:09:10.780 lat (msec) : 250=0.56% 00:09:10.780 cpu : usr=3.08%, sys=4.57%, ctx=173, majf=0, minf=1 00:09:10.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:10.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.780 issued rwts: total=2560,3022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.780 job3: (groupid=0, jobs=1): err= 0: pid=1318911: Mon Dec 9 15:02:12 2024 00:09:10.780 read: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1008msec) 00:09:10.780 slat (usec): min=2, max=23194, avg=167.30, stdev=1155.66 00:09:10.780 clat (usec): min=3057, max=71145, avg=18543.31, 
stdev=10609.25 00:09:10.780 lat (usec): min=5555, max=71154, avg=18710.61, stdev=10737.56 00:09:10.780 clat percentiles (usec): 00:09:10.780 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[12518], 00:09:10.780 | 30.00th=[13566], 40.00th=[14615], 50.00th=[15139], 60.00th=[16909], 00:09:10.780 | 70.00th=[17695], 80.00th=[21890], 90.00th=[29230], 95.00th=[44303], 00:09:10.780 | 99.00th=[61604], 99.50th=[65799], 99.90th=[70779], 99.95th=[70779], 00:09:10.780 | 99.99th=[70779] 00:09:10.780 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:09:10.780 slat (usec): min=2, max=36251, avg=164.44, stdev=1011.21 00:09:10.780 clat (usec): min=3700, max=74059, avg=23693.08, stdev=17238.04 00:09:10.780 lat (usec): min=3711, max=74072, avg=23857.52, stdev=17349.36 00:09:10.780 clat percentiles (usec): 00:09:10.780 | 1.00th=[ 6259], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9765], 00:09:10.780 | 30.00th=[10421], 40.00th=[12125], 50.00th=[14877], 60.00th=[18744], 00:09:10.780 | 70.00th=[36439], 80.00th=[42730], 90.00th=[47973], 95.00th=[54264], 00:09:10.780 | 99.00th=[70779], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:09:10.780 | 99.99th=[73925] 00:09:10.780 bw ( KiB/s): min=12288, max=12288, per=19.05%, avg=12288.00, stdev= 0.00, samples=2 00:09:10.780 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:10.780 lat (msec) : 4=0.12%, 10=19.37%, 20=48.61%, 50=26.39%, 100=5.51% 00:09:10.780 cpu : usr=2.68%, sys=5.16%, ctx=260, majf=0, minf=1 00:09:10.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:10.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.780 issued rwts: total=2843,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.780 00:09:10.780 Run status group 0 (all jobs): 00:09:10.780 READ: bw=56.5MiB/s (59.2MB/s), 9.92MiB/s-25.7MiB/s (10.4MB/s-26.9MB/s), io=57.2MiB (60.0MB), run=1008-1013msec 00:09:10.780 WRITE: bw=63.0MiB/s (66.0MB/s), 11.7MiB/s-27.6MiB/s (12.3MB/s-29.0MB/s), io=63.8MiB (66.9MB), run=1008-1013msec 00:09:10.780 00:09:10.780 Disk stats (read/write): 00:09:10.780 nvme0n1: ios=5870/6144, merge=0/0, ticks=52847/48961, in_queue=101808, util=96.79% 00:09:10.780 nvme0n2: ios=2098/2182, merge=0/0, ticks=39316/65667, in_queue=104983, util=96.74% 00:09:10.780 nvme0n3: ios=2339/2560, merge=0/0, ticks=24961/26971, in_queue=51932, util=97.26% 00:09:10.780 nvme0n4: ios=2083/2527, merge=0/0, ticks=34929/61676, in_queue=96605, util=98.30% 00:09:10.780 15:02:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:10.780 [global] 00:09:10.780 thread=1 00:09:10.780 invalidate=1 00:09:10.780 rw=randwrite 00:09:10.780 time_based=1 00:09:10.780 runtime=1 00:09:10.780 ioengine=libaio 00:09:10.780 direct=1 00:09:10.780 bs=4096 00:09:10.780 iodepth=128 00:09:10.780 norandommap=0 00:09:10.780 numjobs=1 00:09:10.780 00:09:10.780 verify_dump=1 00:09:10.780 verify_backlog=512 00:09:10.780 verify_state_save=0 00:09:10.780 do_verify=1 00:09:10.780 verify=crc32c-intel 00:09:10.780 [job0] 00:09:10.780 filename=/dev/nvme0n1 00:09:10.780 [job1] 00:09:10.780 filename=/dev/nvme0n2 00:09:10.780 [job2] 00:09:10.780 filename=/dev/nvme0n3 00:09:10.780 [job3] 00:09:10.780 filename=/dev/nvme0n4 00:09:10.780 
Could not set queue depth (nvme0n1) 00:09:10.780 Could not set queue depth (nvme0n2) 00:09:10.780 Could not set queue depth (nvme0n3) 00:09:10.780 Could not set queue depth (nvme0n4) 00:09:11.042 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.042 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.042 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.042 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.042 fio-3.35 00:09:11.042 Starting 4 threads 00:09:12.414 00:09:12.414 job0: (groupid=0, jobs=1): err= 0: pid=1319341: Mon Dec 9 15:02:13 2024 00:09:12.414 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1004msec) 00:09:12.414 slat (nsec): min=1248, max=10539k, avg=113740.53, stdev=776498.91 00:09:12.414 clat (usec): min=2730, max=66541, avg=13696.09, stdev=9012.09 00:09:12.414 lat (usec): min=3568, max=66551, avg=13809.83, stdev=9083.37 00:09:12.414 clat percentiles (usec): 00:09:12.414 | 1.00th=[ 4424], 5.00th=[ 5866], 10.00th=[ 8029], 20.00th=[ 9634], 00:09:12.414 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11863], 00:09:12.414 | 70.00th=[13173], 80.00th=[15664], 90.00th=[19530], 95.00th=[30016], 00:09:12.414 | 99.00th=[61604], 99.50th=[64226], 99.90th=[66323], 99.95th=[66323], 00:09:12.414 | 99.99th=[66323] 00:09:12.414 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:12.414 slat (usec): min=2, max=11137, avg=144.24, stdev=868.01 00:09:12.414 clat (usec): min=1015, max=130499, avg=23309.92, stdev=27112.22 00:09:12.414 lat (usec): min=1019, max=130511, avg=23454.16, stdev=27273.30 00:09:12.414 clat percentiles (msec): 00:09:12.414 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:09:12.414 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 16], 00:09:12.414 | 70.00th=[ 21], 80.00th=[ 25], 90.00th=[ 59], 95.00th=[ 105], 00:09:12.414 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 131], 99.95th=[ 131], 00:09:12.414 | 99.99th=[ 131] 00:09:12.414 bw ( KiB/s): min=13768, max=14896, per=19.87%, avg=14332.00, stdev=797.62, samples=2 00:09:12.414 iops : min= 3442, max= 3724, avg=3583.00, stdev=199.40, samples=2 00:09:12.414 lat (msec) : 2=0.34%, 4=0.72%, 10=28.21%, 20=48.11%, 50=15.29% 00:09:12.414 lat (msec) : 100=4.63%, 250=2.70% 00:09:12.414 cpu : usr=2.29%, sys=6.08%, ctx=299, majf=0, minf=1 00:09:12.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:12.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.414 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.414 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.414 job1: (groupid=0, jobs=1): err= 0: pid=1319359: Mon Dec 9 15:02:13 2024 00:09:12.414 read: IOPS=5423, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1003msec) 00:09:12.414 slat (nsec): min=1381, max=9911.5k, avg=89356.29, stdev=507056.25 00:09:12.414 clat (usec): min=711, max=21072, avg=11519.14, stdev=1903.65 00:09:12.414 lat (usec): min=3272, max=21088, avg=11608.49, stdev=1908.27 00:09:12.414 clat percentiles (usec): 00:09:12.414 | 1.00th=[ 3687], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10159], 00:09:12.415 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 
60.00th=[11994], 00:09:12.415 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13304], 95.00th=[14091], 00:09:12.415 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:09:12.415 | 99.99th=[21103] 00:09:12.415 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:12.415 slat (usec): min=2, max=8750, avg=83.90, stdev=460.26 00:09:12.415 clat (usec): min=263, max=27706, avg=11376.60, stdev=2484.03 00:09:12.415 lat (usec): min=611, max=28292, avg=11460.50, stdev=2486.27 00:09:12.415 clat percentiles (usec): 00:09:12.415 | 1.00th=[ 3785], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[10290], 00:09:12.415 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:09:12.415 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13698], 00:09:12.415 | 99.00th=[20841], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:09:12.415 | 99.99th=[27657] 00:09:12.415 bw ( KiB/s): min=20480, max=24576, per=31.24%, avg=22528.00, stdev=2896.31, samples=2 00:09:12.415 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:12.415 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.01% 00:09:12.415 lat (msec) : 2=0.08%, 4=1.07%, 10=15.44%, 20=82.54%, 50=0.79% 00:09:12.415 cpu : usr=4.09%, sys=6.19%, ctx=555, majf=0, minf=1 00:09:12.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:12.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.415 issued rwts: total=5440,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.415 job2: (groupid=0, jobs=1): err= 0: pid=1319360: Mon Dec 9 15:02:13 2024 00:09:12.415 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:09:12.415 slat (nsec): min=1205, max=10504k, avg=101042.73, stdev=663615.36 00:09:12.415 clat (usec): min=5491, max=30248, avg=12903.90, stdev=3271.04 00:09:12.415 lat (usec): min=5503, max=30251, avg=13004.94, stdev=3330.00 00:09:12.415 clat percentiles (usec): 00:09:12.415 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:09:12.415 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:09:12.415 | 70.00th=[12911], 80.00th=[13698], 90.00th=[17433], 95.00th=[20055], 00:09:12.415 | 99.00th=[27132], 99.50th=[28443], 99.90th=[30278], 99.95th=[30278], 00:09:12.415 | 99.99th=[30278] 00:09:12.415 write: IOPS=4295, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1009msec); 0 zone resets 00:09:12.415 slat (usec): min=2, max=17355, avg=129.41, stdev=782.28 00:09:12.415 clat (msec): min=4, max=101, avg=17.27, stdev=14.57 00:09:12.415 lat (msec): min=4, max=101, avg=17.40, stdev=14.66 00:09:12.415 clat percentiles (msec): 00:09:12.415 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:09:12.415 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:09:12.415 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 28], 95.00th=[ 38], 00:09:12.415 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:09:12.415 | 99.99th=[ 102] 00:09:12.415 bw ( KiB/s): min=13176, max=20480, per=23.33%, avg=16828.00, stdev=5164.71, samples=2 00:09:12.415 iops : min= 3294, max= 5120, avg=4207.00, stdev=1291.18, samples=2 00:09:12.415 lat (msec) : 10=9.42%, 20=74.96%, 50=13.74%, 100=1.80%, 250=0.08% 00:09:12.415 cpu : usr=2.18%, sys=6.45%, ctx=367, majf=0, minf=1 00:09:12.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 
00:09:12.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.415 issued rwts: total=4096,4334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.415 job3: (groupid=0, jobs=1): err= 0: pid=1319361: Mon Dec 9 15:02:13 2024 00:09:12.415 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:12.415 slat (nsec): min=1046, max=10864k, avg=102920.60, stdev=666390.23 00:09:12.415 clat (usec): min=3883, max=34954, avg=12751.94, stdev=4500.06 00:09:12.415 lat (usec): min=3967, max=34978, avg=12854.86, stdev=4561.50 00:09:12.415 clat percentiles (usec): 00:09:12.415 | 1.00th=[ 7046], 5.00th=[ 8586], 10.00th=[10290], 20.00th=[10552], 00:09:12.415 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:09:12.415 | 70.00th=[12780], 80.00th=[13960], 90.00th=[19006], 95.00th=[23200], 00:09:12.415 | 99.00th=[29230], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:09:12.415 | 99.99th=[34866] 00:09:12.415 write: IOPS=4622, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1004msec); 0 zone resets 00:09:12.415 slat (nsec): min=1717, max=10081k, avg=90513.33, stdev=472314.34 00:09:12.415 clat (usec): min=1509, max=122935, avg=14781.78, stdev=13969.16 00:09:12.415 lat (usec): min=1531, max=122947, avg=14872.30, stdev=14012.63 00:09:12.415 clat percentiles (msec): 00:09:12.415 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:09:12.415 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:09:12.415 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 22], 95.00th=[ 30], 00:09:12.415 | 99.00th=[ 105], 99.50th=[ 113], 99.90th=[ 124], 99.95th=[ 124], 00:09:12.415 | 99.99th=[ 124] 00:09:12.415 bw ( KiB/s): min=15416, max=21448, per=25.56%, avg=18432.00, stdev=4265.27, samples=2 00:09:12.415 iops : min= 3854, max= 5362, avg=4608.00, stdev=1066.32, samples=2 00:09:12.415 lat (msec) : 2=0.02%, 4=0.40%, 10=10.23%, 20=78.59%, 50=9.51% 00:09:12.415 lat (msec) : 100=0.69%, 250=0.55% 00:09:12.415 cpu : usr=2.79%, sys=4.79%, ctx=498, majf=0, minf=2 00:09:12.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:12.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.415 issued rwts: total=4608,4641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.415 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.415 00:09:12.415 Run status group 0 (all jobs): 00:09:12.415 READ: bw=67.1MiB/s (70.4MB/s), 12.4MiB/s-21.2MiB/s (13.0MB/s-22.2MB/s), io=67.7MiB (71.0MB), run=1003-1009msec 00:09:12.415 WRITE: bw=70.4MiB/s (73.8MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-23.0MB/s), io=71.1MiB (74.5MB), run=1003-1009msec 00:09:12.415 00:09:12.415 Disk stats (read/write): 00:09:12.415 nvme0n1: ios=2716/3072, merge=0/0, ticks=34641/66341, in_queue=100982, util=100.00% 00:09:12.415 nvme0n2: ios=4658/4803, merge=0/0, ticks=24251/22257, in_queue=46508, util=97.35% 00:09:12.415 nvme0n3: ios=3605/3711, merge=0/0, ticks=33979/38345, in_queue=72324, util=97.17% 00:09:12.415 nvme0n4: ios=3584/3713, merge=0/0, ticks=23992/35396, in_queue=59388, util=89.51% 00:09:12.415 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:12.415 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1319494 00:09:12.415 15:02:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:12.415 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:12.415 [global] 00:09:12.415 thread=1 00:09:12.415 invalidate=1 00:09:12.415 rw=read 00:09:12.415 time_based=1 00:09:12.415 runtime=10 00:09:12.415 ioengine=libaio 00:09:12.415 direct=1 00:09:12.415 bs=4096 00:09:12.415 iodepth=1 00:09:12.415 norandommap=1 00:09:12.415 numjobs=1 00:09:12.415 00:09:12.415 [job0] 00:09:12.415 filename=/dev/nvme0n1 00:09:12.415 [job1] 00:09:12.415 filename=/dev/nvme0n2 00:09:12.415 [job2] 00:09:12.415 filename=/dev/nvme0n3 00:09:12.416 [job3] 00:09:12.416 filename=/dev/nvme0n4 00:09:12.416 Could not set queue depth (nvme0n1) 00:09:12.416 Could not set queue depth (nvme0n2) 00:09:12.416 Could not set queue depth (nvme0n3) 00:09:12.416 Could not set queue depth (nvme0n4) 00:09:12.416 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.416 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.416 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.416 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.416 fio-3.35 00:09:12.416 Starting 4 threads 00:09:15.691 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:15.692 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:15.692 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9449472, buflen=4096 00:09:15.692 fio: pid=1319735, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:15.692 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=25796608, buflen=4096 00:09:15.692 fio: pid=1319734, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:15.692 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.692 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:15.692 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.692 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:15.692 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3424256, buflen=4096 00:09:15.692 fio: pid=1319732, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:15.950 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.950 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:15.950 fio: io_u error 
on file /dev/nvme0n2: Operation not supported: read offset=19234816, buflen=4096 00:09:15.950 fio: pid=1319733, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:15.950 00:09:15.950 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1319732: Mon Dec 9 15:02:17 2024 00:09:15.950 read: IOPS=267, BW=1069KiB/s (1095kB/s)(3344KiB/3128msec) 00:09:15.950 slat (usec): min=2, max=27690, avg=53.38, stdev=1003.44 00:09:15.950 clat (usec): min=184, max=41956, avg=3659.79, stdev=11112.36 00:09:15.950 lat (usec): min=194, max=68983, avg=3713.20, stdev=11306.01 00:09:15.950 clat percentiles (usec): 00:09:15.950 | 1.00th=[ 215], 5.00th=[ 235], 10.00th=[ 247], 20.00th=[ 260], 00:09:15.950 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 351], 00:09:15.950 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 510], 95.00th=[40633], 00:09:15.950 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:15.950 | 99.99th=[42206] 00:09:15.950 bw ( KiB/s): min= 112, max= 5840, per=6.53%, avg=1108.00, stdev=2318.40, samples=6 00:09:15.950 iops : min= 28, max= 1460, avg=277.00, stdev=579.60, samples=6 00:09:15.950 lat (usec) : 250=12.54%, 500=72.28%, 750=6.81% 00:09:15.950 lat (msec) : 50=8.24% 00:09:15.950 cpu : usr=0.22%, sys=0.42%, ctx=841, majf=0, minf=1 00:09:15.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 issued rwts: total=837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.950 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1319733: Mon Dec 9 15:02:17 2024 00:09:15.950 read: IOPS=1409, BW=5637KiB/s (5773kB/s)(18.3MiB/3332msec) 00:09:15.950 slat (usec): min=2, max=15782, avg=17.90, stdev=388.64 00:09:15.950 clat (usec): min=186, max=44777, avg=685.16, stdev=4189.54 00:09:15.950 lat (usec): min=188, max=57065, avg=703.06, stdev=4302.22 00:09:15.950 clat percentiles (usec): 00:09:15.950 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:09:15.950 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:09:15.950 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:09:15.950 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:09:15.950 | 99.99th=[44827] 00:09:15.950 bw ( KiB/s): min= 200, max=15520, per=36.76%, avg=6239.00, stdev=7406.49, samples=6 00:09:15.950 iops : min= 50, max= 3880, avg=1559.67, stdev=1851.70, samples=6 00:09:15.950 lat (usec) : 250=54.25%, 500=44.52%, 750=0.15% 00:09:15.950 lat (msec) : 50=1.06% 00:09:15.950 cpu : usr=0.69%, sys=2.16%, ctx=4701, majf=0, minf=1 00:09:15.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 issued rwts: total=4697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.950 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1319734: Mon Dec 9 15:02:17 2024 00:09:15.950 read: IOPS=2161, BW=8645KiB/s (8853kB/s)(24.6MiB/2914msec) 00:09:15.950 
slat (nsec): min=6403, max=37565, avg=7233.34, stdev=1385.29 00:09:15.950 clat (usec): min=173, max=42013, avg=450.67, stdev=2812.73 00:09:15.950 lat (usec): min=180, max=42035, avg=457.90, stdev=2813.63 00:09:15.950 clat percentiles (usec): 00:09:15.950 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 239], 00:09:15.950 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:09:15.950 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 289], 00:09:15.950 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[41157], 99.95th=[41681], 00:09:15.950 | 99.99th=[42206] 00:09:15.950 bw ( KiB/s): min= 96, max=15384, per=59.22%, avg=10051.20, stdev=7114.64, samples=5 00:09:15.950 iops : min= 24, max= 3846, avg=2512.80, stdev=1778.66, samples=5 00:09:15.950 lat (usec) : 250=44.80%, 500=54.39%, 750=0.32% 00:09:15.950 lat (msec) : 50=0.48% 00:09:15.950 cpu : usr=0.86%, sys=1.68%, ctx=6299, majf=0, minf=2 00:09:15.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 issued rwts: total=6299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.950 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1319735: Mon Dec 9 15:02:17 2024 00:09:15.950 read: IOPS=841, BW=3364KiB/s (3445kB/s)(9228KiB/2743msec) 00:09:15.950 slat (nsec): min=2464, max=38353, avg=8437.61, stdev=2532.89 00:09:15.950 clat (usec): min=189, max=42057, avg=1167.41, stdev=5628.30 00:09:15.950 lat (usec): min=196, max=42079, avg=1175.84, stdev=5629.60 00:09:15.950 clat percentiles (usec): 00:09:15.950 | 1.00th=[ 221], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 262], 00:09:15.950 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 343], 60.00th=[ 469], 00:09:15.950 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 506], 95.00th=[ 515], 00:09:15.950 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:15.950 | 99.99th=[42206] 00:09:15.950 bw ( KiB/s): min= 136, max=10312, per=21.69%, avg=3681.60, stdev=4154.13, samples=5 00:09:15.950 iops : min= 34, max= 2578, avg=920.40, stdev=1038.53, samples=5 00:09:15.950 lat (usec) : 250=8.28%, 500=76.34%, 750=13.21%, 1000=0.13% 00:09:15.950 lat (msec) : 2=0.04%, 50=1.95% 00:09:15.950 cpu : usr=0.47%, sys=1.31%, ctx=2308, majf=0, minf=2 00:09:15.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.950 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.950 00:09:15.950 Run status group 0 (all jobs): 00:09:15.950 READ: bw=16.6MiB/s (17.4MB/s), 1069KiB/s-8645KiB/s (1095kB/s-8853kB/s), io=55.2MiB (57.9MB), run=2743-3332msec 00:09:15.950 00:09:15.950 Disk stats (read/write): 00:09:15.950 nvme0n1: ios=862/0, merge=0/0, ticks=3259/0, in_queue=3259, util=98.52% 00:09:15.950 nvme0n2: ios=4684/0, merge=0/0, ticks=2969/0, in_queue=2969, util=94.99% 00:09:15.950 nvme0n3: ios=6295/0, merge=0/0, ticks=2731/0, in_queue=2731, util=96.44% 00:09:15.950 nvme0n4: ios=2304/0, merge=0/0, ticks=2546/0, in_queue=2546, util=96.44% 00:09:16.207 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.207 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:16.465 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.465 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:16.722 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.722 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:16.722 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:16.722 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:16.979 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:16.979 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1319494 00:09:16.979 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:16.979 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:17.236 nvmf hotplug test: fio failed as expected 00:09:17.236 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.236 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:17.236 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.494 rmmod nvme_tcp 00:09:17.494 rmmod nvme_fabrics 00:09:17.494 rmmod nvme_keyring 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1316692 ']' 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1316692 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1316692 ']' 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1316692 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316692 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316692' 00:09:17.494 killing process with pid 1316692 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1316692 00:09:17.494 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1316692 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.753 15:02:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.753 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.754 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.754 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.658 00:09:19.658 real 0m26.917s 00:09:19.658 user 1m46.598s 00:09:19.658 sys 0m8.324s 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.658 ************************************ 00:09:19.658 END TEST nvmf_fio_target 00:09:19.658 ************************************ 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.658 15:02:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.918 ************************************ 00:09:19.918 START TEST nvmf_bdevio 00:09:19.918 ************************************ 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:19.918 * Looking for test storage... 
00:09:19.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.918 --rc genhtml_branch_coverage=1 00:09:19.918 --rc genhtml_function_coverage=1 00:09:19.918 --rc genhtml_legend=1 00:09:19.918 --rc geninfo_all_blocks=1 00:09:19.918 --rc geninfo_unexecuted_blocks=1 00:09:19.918 00:09:19.918 ' 00:09:19.918 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.918 --rc genhtml_branch_coverage=1 00:09:19.918 --rc genhtml_function_coverage=1 00:09:19.918 --rc genhtml_legend=1 00:09:19.918 --rc geninfo_all_blocks=1 00:09:19.918 --rc geninfo_unexecuted_blocks=1 00:09:19.918 00:09:19.919 ' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.919 --rc genhtml_branch_coverage=1 00:09:19.919 --rc genhtml_function_coverage=1 00:09:19.919 --rc genhtml_legend=1 00:09:19.919 --rc geninfo_all_blocks=1 00:09:19.919 --rc geninfo_unexecuted_blocks=1 00:09:19.919 00:09:19.919 ' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.919 --rc genhtml_branch_coverage=1 00:09:19.919 --rc genhtml_function_coverage=1 00:09:19.919 --rc genhtml_legend=1 00:09:19.919 --rc geninfo_all_blocks=1 00:09:19.919 --rc geninfo_unexecuted_blocks=1 00:09:19.919 00:09:19.919 ' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.919 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:26.491 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:26.492 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:26.492 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.492 15:02:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:26.492 Found net devices under 0000:af:00.0: cvl_0_0 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:26.492 Found net devices under 0000:af:00.1: cvl_0_1 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.492 
15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:09:26.492 00:09:26.492 --- 10.0.0.2 ping statistics --- 00:09:26.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.492 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:09:26.492 00:09:26.492 --- 10.0.0.1 ping statistics --- 00:09:26.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.492 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1323965 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:26.492 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1323965 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1323965 ']' 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 [2024-12-09 15:02:27.693851] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:09:26.493 [2024-12-09 15:02:27.693899] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.493 [2024-12-09 15:02:27.772182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.493 [2024-12-09 15:02:27.812671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.493 [2024-12-09 15:02:27.812710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.493 [2024-12-09 15:02:27.812716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.493 [2024-12-09 15:02:27.812722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.493 [2024-12-09 15:02:27.812728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.493 [2024-12-09 15:02:27.814294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:26.493 [2024-12-09 15:02:27.814403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:26.493 [2024-12-09 15:02:27.814487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.493 [2024-12-09 15:02:27.814488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 [2024-12-09 15:02:27.963662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 Malloc0 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 15:02:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.493 [2024-12-09 15:02:28.035775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.493 { 00:09:26.493 "params": { 00:09:26.493 "name": "Nvme$subsystem", 00:09:26.493 "trtype": "$TEST_TRANSPORT", 00:09:26.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.493 "adrfam": "ipv4", 00:09:26.493 "trsvcid": "$NVMF_PORT", 00:09:26.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.493 "hdgst": ${hdgst:-false}, 00:09:26.493 "ddgst": ${ddgst:-false} 00:09:26.493 }, 00:09:26.493 "method": "bdev_nvme_attach_controller" 00:09:26.493 } 00:09:26.493 EOF 00:09:26.493 )") 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:26.493 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.493 "params": { 00:09:26.493 "name": "Nvme1", 00:09:26.493 "trtype": "tcp", 00:09:26.493 "traddr": "10.0.0.2", 00:09:26.493 "adrfam": "ipv4", 00:09:26.493 "trsvcid": "4420", 00:09:26.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.493 "hdgst": false, 00:09:26.493 "ddgst": false 00:09:26.493 }, 00:09:26.493 "method": "bdev_nvme_attach_controller" 00:09:26.493 }' 00:09:26.493 [2024-12-09 15:02:28.085019] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:09:26.493 [2024-12-09 15:02:28.085061] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324176 ] 00:09:26.493 [2024-12-09 15:02:28.157558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.493 [2024-12-09 15:02:28.199950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.493 [2024-12-09 15:02:28.200059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.493 [2024-12-09 15:02:28.200059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.750 I/O targets: 00:09:26.750 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:26.750 00:09:26.750 00:09:26.750 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.750 http://cunit.sourceforge.net/ 00:09:26.750 00:09:26.750 00:09:26.750 Suite: bdevio tests on: Nvme1n1 00:09:26.750 Test: blockdev write read block ...passed 00:09:26.750 Test: blockdev write zeroes read block ...passed 00:09:26.750 Test: blockdev write zeroes read no split ...passed 00:09:26.750 Test: blockdev write zeroes read split ...passed 00:09:26.750 Test: blockdev write zeroes read split partial ...passed 00:09:26.750 Test: blockdev reset ...[2024-12-09 15:02:28.468891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:26.750 [2024-12-09 15:02:28.468957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c48b0 (9): Bad file descriptor 00:09:26.750 [2024-12-09 15:02:28.483309] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:26.750 passed 00:09:26.750 Test: blockdev write read 8 blocks ...passed 00:09:26.750 Test: blockdev write read size > 128k ...passed 00:09:26.750 Test: blockdev write read invalid size ...passed 00:09:26.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:26.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:26.750 Test: blockdev write read max offset ...passed 00:09:27.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.008 Test: blockdev writev readv 8 blocks ...passed 00:09:27.008 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.008 Test: blockdev writev readv block ...passed 00:09:27.008 Test: blockdev writev readv size > 128k ...passed 00:09:27.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.008 Test: blockdev comparev and writev ...[2024-12-09 15:02:28.695954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.695981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.695996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.696223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.696246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.696481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.696506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.696747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.696768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.008 [2024-12-09 15:02:28.696774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:27.008 passed 00:09:27.008 Test: blockdev nvme passthru rw ...passed 00:09:27.008 Test: blockdev nvme passthru vendor specific ...[2024-12-09 15:02:28.779573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.008 [2024-12-09 15:02:28.779588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.779690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.008 [2024-12-09 15:02:28.779700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.779804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.008 [2024-12-09 15:02:28.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:27.008 [2024-12-09 15:02:28.779929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.008 [2024-12-09 15:02:28.779943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:27.008 passed 00:09:27.008 Test: blockdev nvme admin passthru ...passed 00:09:27.265 Test: blockdev copy ...passed 00:09:27.266 00:09:27.266 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.266 suites 1 1 n/a 0 0 00:09:27.266 tests 23 23 23 0 0 00:09:27.266 asserts 152 152 152 0 n/a 00:09:27.266 00:09:27.266 Elapsed time = 0.982 seconds 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.266 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.266 rmmod nvme_tcp 00:09:27.266 rmmod nvme_fabrics 00:09:27.266 rmmod nvme_keyring 00:09:27.266 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.266 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:27.266 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
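The teardown traced here mirrors the setup in reverse: the test subsystem is removed over RPC and the initiator-side kernel modules are unloaded, before the target process is killed and the namespace plumbing flushed on the lines that follow. A hand-run sketch of the RPC/module portion, again assuming scripts/rpc.py and root privileges:

  # Sketch: tear down what the bdevio test created (mirrors the traced nvmftestfini path)
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # removing the subsystem drops its namespace and listener
  sync                                                              # flush dirty pages before unloading modules
  modprobe -v -r nvme-tcp                                           # removes nvme_tcp (and, as logged above, nvme_fabrics/nvme_keyring)
  modprobe -v -r nvme-fabrics                                       # no-op if the previous removal already cascaded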
00:09:27.266 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1323965 ']' 00:09:27.266 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1323965 00:09:27.266 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1323965 ']' 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1323965 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323965 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323965' 00:09:27.524 killing process with pid 1323965 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1323965 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1323965 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.524 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.525 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:30.080 00:09:30.080 real 0m9.884s 00:09:30.080 user 0m9.400s 00:09:30.080 sys 0m4.985s 00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.080 ************************************ 00:09:30.080 END TEST nvmf_bdevio 00:09:30.080 ************************************ 00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:30.080 00:09:30.080 real 4m35.717s 00:09:30.080 user 10m19.834s 00:09:30.080 sys 1m36.741s 
00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.080 ************************************ 00:09:30.080 END TEST nvmf_target_core 00:09:30.080 ************************************ 00:09:30.080 15:02:31 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:30.080 15:02:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.080 15:02:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.080 15:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.080 ************************************ 00:09:30.080 START TEST nvmf_target_extra 00:09:30.080 ************************************ 00:09:30.080 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:30.080 * Looking for test storage... 00:09:30.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.081 --rc genhtml_branch_coverage=1 00:09:30.081 --rc genhtml_function_coverage=1 00:09:30.081 --rc genhtml_legend=1 00:09:30.081 --rc geninfo_all_blocks=1 00:09:30.081 --rc geninfo_unexecuted_blocks=1 00:09:30.081 00:09:30.081 ' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.081 --rc genhtml_branch_coverage=1 00:09:30.081 --rc genhtml_function_coverage=1 00:09:30.081 --rc genhtml_legend=1 00:09:30.081 --rc geninfo_all_blocks=1 00:09:30.081 --rc geninfo_unexecuted_blocks=1 00:09:30.081 00:09:30.081 ' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.081 --rc genhtml_branch_coverage=1 00:09:30.081 --rc genhtml_function_coverage=1 00:09:30.081 --rc genhtml_legend=1 00:09:30.081 --rc geninfo_all_blocks=1 00:09:30.081 --rc geninfo_unexecuted_blocks=1 00:09:30.081 00:09:30.081 ' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.081 --rc genhtml_branch_coverage=1 00:09:30.081 --rc genhtml_function_coverage=1 00:09:30.081 --rc genhtml_legend=1 00:09:30.081 --rc geninfo_all_blocks=1 00:09:30.081 --rc geninfo_unexecuted_blocks=1 00:09:30.081 00:09:30.081 ' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 ************************************ 00:09:30.081 START TEST nvmf_example 00:09:30.081 ************************************ 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:30.081 * Looking for test storage... 
00:09:30.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.081 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.082 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.082 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.341 --rc genhtml_branch_coverage=1 00:09:30.341 --rc genhtml_function_coverage=1 00:09:30.341 --rc genhtml_legend=1 00:09:30.341 --rc geninfo_all_blocks=1 00:09:30.341 --rc geninfo_unexecuted_blocks=1 00:09:30.341 00:09:30.341 ' 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.341 --rc genhtml_branch_coverage=1 00:09:30.341 --rc genhtml_function_coverage=1 00:09:30.341 --rc genhtml_legend=1 00:09:30.341 --rc geninfo_all_blocks=1 00:09:30.341 --rc geninfo_unexecuted_blocks=1 00:09:30.341 00:09:30.341 ' 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.341 --rc genhtml_branch_coverage=1 00:09:30.341 --rc genhtml_function_coverage=1 00:09:30.341 --rc genhtml_legend=1 00:09:30.341 --rc geninfo_all_blocks=1 00:09:30.341 --rc geninfo_unexecuted_blocks=1 00:09:30.341 00:09:30.341 ' 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.341 --rc genhtml_branch_coverage=1 00:09:30.341 --rc genhtml_function_coverage=1 00:09:30.341 --rc genhtml_legend=1 00:09:30.341 --rc geninfo_all_blocks=1 00:09:30.341 --rc geninfo_unexecuted_blocks=1 00:09:30.341 00:09:30.341 ' 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:30.341 15:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.341 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:30.342 15:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.342 15:02:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:37.067 15:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:37.067 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.067 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:37.068 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:37.068 Found net devices under 0000:af:00.0: cvl_0_0 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:37.068 Found net devices under 0000:af:00.1: cvl_0_1 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.068 15:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:09:37.068 00:09:37.068 --- 10.0.0.2 ping statistics --- 00:09:37.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.068 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:37.068 00:09:37.068 --- 10.0.0.1 ping statistics --- 00:09:37.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.068 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1327969 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1327969 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1327969 ']' 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.068 15:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.068 15:02:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.068 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:37.326 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:49.515 Initializing NVMe Controllers 00:09:49.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:49.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:49.515 Initialization complete. Launching workers. 00:09:49.515 ======================================================== 00:09:49.515 Latency(us) 00:09:49.515 Device Information : IOPS MiB/s Average min max 00:09:49.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18508.65 72.30 3457.20 687.45 16021.58 00:09:49.515 ======================================================== 00:09:49.515 Total : 18508.65 72.30 3457.20 687.45 16021.58 00:09:49.515 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.515 rmmod nvme_tcp 00:09:49.515 rmmod nvme_fabrics 00:09:49.515 rmmod nvme_keyring 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1327969 ']' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1327969 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1327969 ']' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1327969 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327969 00:09:49.515 15:02:49 
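
By this point the trace has created the TCP transport, a malloc bdev, the cnode1 subsystem, its namespace, and a listener on 10.0.0.2:4420, then driven the target with spdk_nvme_perf. Collapsed into one place, the same sequence looks roughly like the sketch below; it substitutes scripts/rpc.py for the suite's rpc_cmd wrapper and leaves the SPDK checkout path as a variable, so those two details are assumptions, while the RPC names, arguments, NQN, and perf flags are the ones visible in the log:

    #!/usr/bin/env bash
    # Sketch of the bring-up and workload traced above, assuming an SPDK tree
    # at $SPDK_ROOT and a target already listening on /var/tmp/spdk.sock.
    set -euo pipefail

    SPDK_ROOT=${SPDK_ROOT:?set SPDK_ROOT to the spdk checkout}   # placeholder
    rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }   # stand-in for the suite's rpc_cmd

    NQN="nqn.2016-06.io.spdk:cnode1"

    # TCP transport with the same options the trace passes (-o -u 8192).
    rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc bdev with 512-byte blocks backs the namespace; rpc.py
    # prints the new bdev name (Malloc0 in the trace), which we capture.
    malloc_bdev=$(rpc bdev_malloc_create 64 512)

    # Subsystem allowing any host (-a), with the serial number from the trace.
    rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns "$NQN" "$malloc_bdev"
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Same workload as the trace: queue depth 64, 4 KiB I/Os, mixed random
    # read/write for 10 seconds against the listener created above.
    "$SPDK_ROOT/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
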
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327969' 00:09:49.515 killing process with pid 1327969 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1327969 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1327969 00:09:49.515 nvmf threads initialize successfully 00:09:49.515 bdev subsystem init successfully 00:09:49.515 created a nvmf target service 00:09:49.515 create targets's poll groups done 00:09:49.515 all subsystems of target started 00:09:49.515 nvmf target is running 00:09:49.515 all subsystems of target stopped 00:09:49.515 destroy targets's poll groups done 00:09:49.515 destroyed the nvmf target service 00:09:49.515 bdev subsystem finish successfully 00:09:49.515 nvmf threads destroy successfully 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.515 15:02:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.773 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.773 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:49.773 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.773 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 00:09:49.773 real 0m19.816s 00:09:49.773 user 0m45.936s 00:09:49.773 sys 0m6.124s 00:09:49.773 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.773 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 ************************************ 00:09:49.773 END TEST nvmf_example 00:09:49.773 ************************************ 00:09:50.033 15:02:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
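
The nvmftestfini path traced above mirrors the setup in reverse: the host-side NVMe/TCP modules are removed, the target process is killed, the iptables rules tagged for the test are dropped, the SPDK namespace is removed, and the leftover address on cvl_0_1 is flushed. A compressed sketch of those steps, assuming the pid, namespace, and interface names from the log, and standing in a simple kill-and-poll for the suite's killprocess helper:

    #!/usr/bin/env bash
    # Sketch of the cleanup phase, using the names recorded in the trace.
    nvmfpid=1327969                  # example target pid from the log
    NVMF_TARGET_NS="cvl_0_0_ns_spdk"
    NVMF_SECOND_IFACE="cvl_0_1"

    sync
    # Remove the initiator-side modules; -r also drops now-unused dependencies.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true

    # Stop the target and wait until the pid disappears (kill -0 probes only).
    kill "$nvmfpid" 2>/dev/null || true
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done

    # Re-load the ruleset minus anything tagged SPDK_NVMF, as the trace does.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Drop the test namespace (what _remove_spdk_ns boils down to is assumed
    # here) and flush the address left on the peer interface.
    ip netns delete "$NVMF_TARGET_NS" 2>/dev/null || true
    ip -4 addr flush "$NVMF_SECOND_IFACE"
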
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:50.033 15:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.033 15:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:50.034 ************************************ 00:09:50.034 START TEST nvmf_filesystem 00:09:50.034 ************************************ 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:50.034 * Looking for test storage... 00:09:50.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.034 --rc genhtml_branch_coverage=1 00:09:50.034 --rc genhtml_function_coverage=1 00:09:50.034 --rc genhtml_legend=1 00:09:50.034 --rc geninfo_all_blocks=1 00:09:50.034 --rc geninfo_unexecuted_blocks=1 00:09:50.034 00:09:50.034 ' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.034 --rc genhtml_branch_coverage=1 00:09:50.034 --rc genhtml_function_coverage=1 00:09:50.034 --rc genhtml_legend=1 00:09:50.034 --rc geninfo_all_blocks=1 00:09:50.034 --rc geninfo_unexecuted_blocks=1 00:09:50.034 00:09:50.034 ' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.034 --rc genhtml_branch_coverage=1 00:09:50.034 --rc genhtml_function_coverage=1 00:09:50.034 --rc genhtml_legend=1 00:09:50.034 --rc geninfo_all_blocks=1 00:09:50.034 --rc geninfo_unexecuted_blocks=1 00:09:50.034 00:09:50.034 ' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.034 --rc genhtml_branch_coverage=1 00:09:50.034 --rc genhtml_function_coverage=1 00:09:50.034 --rc genhtml_legend=1 00:09:50.034 --rc geninfo_all_blocks=1 00:09:50.034 --rc geninfo_unexecuted_blocks=1 00:09:50.034 00:09:50.034 ' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:50.034 15:02:51 
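
The span above is scripts/common.sh working out whether the installed lcov (1.15 here) is older than 2, by splitting both version strings on '.', '-' and ':' and comparing the fields numerically; the older tool keeps the --rc lcov_branch_coverage/lcov_function_coverage spellings exported just after. A reduced stand-in for that lt/cmp_versions logic, written as a single function for illustration rather than a re-statement of the script's helpers:

    #!/usr/bin/env bash
    # Sketch: return success when version $1 is strictly older than version $2.
    version_lt() {
        local IFS=.-:                  # same separators the trace splits on
        local -a a=($1) b=($2)
        local i x y
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            [[ $x =~ ^[0-9]+$ ]] || x=0   # missing/non-numeric fields count as 0
            [[ $y =~ ^[0-9]+$ ]] || y=0
            (( x > y )) && return 1       # strictly newer: not less-than
            (( x < y )) && return 0       # strictly older: less-than
        done
        return 1                          # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.0: keep the --rc lcov_*_coverage options"
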
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:50.034 
15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:50.034 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:50.035 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:50.035 #define SPDK_CONFIG_H 00:09:50.035 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:50.035 #define SPDK_CONFIG_APPS 1 00:09:50.035 #define SPDK_CONFIG_ARCH native 00:09:50.035 #undef SPDK_CONFIG_ASAN 00:09:50.035 #undef SPDK_CONFIG_AVAHI 00:09:50.035 #undef SPDK_CONFIG_CET 00:09:50.035 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:50.035 #define SPDK_CONFIG_COVERAGE 1 00:09:50.035 #define SPDK_CONFIG_CROSS_PREFIX 00:09:50.035 #undef SPDK_CONFIG_CRYPTO 00:09:50.035 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:50.035 #undef SPDK_CONFIG_CUSTOMOCF 00:09:50.035 #undef SPDK_CONFIG_DAOS 00:09:50.035 #define SPDK_CONFIG_DAOS_DIR 00:09:50.035 #define SPDK_CONFIG_DEBUG 1 00:09:50.035 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:50.035 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:50.035 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:50.035 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:50.035 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:50.035 #undef SPDK_CONFIG_DPDK_UADK 00:09:50.035 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:50.035 #define SPDK_CONFIG_EXAMPLES 1 00:09:50.035 #undef SPDK_CONFIG_FC 00:09:50.035 #define SPDK_CONFIG_FC_PATH 00:09:50.035 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:50.035 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:50.035 #define SPDK_CONFIG_FSDEV 1 00:09:50.035 #undef SPDK_CONFIG_FUSE 00:09:50.035 #undef SPDK_CONFIG_FUZZER 00:09:50.035 #define SPDK_CONFIG_FUZZER_LIB 00:09:50.035 #undef SPDK_CONFIG_GOLANG 00:09:50.035 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:50.035 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:50.035 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:50.035 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:50.035 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:50.035 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:50.035 #undef SPDK_CONFIG_HAVE_LZ4 00:09:50.035 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:50.035 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:50.035 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:50.035 #define SPDK_CONFIG_IDXD 1 00:09:50.035 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:50.035 #undef SPDK_CONFIG_IPSEC_MB 00:09:50.035 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:50.035 #define SPDK_CONFIG_ISAL 1 00:09:50.035 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:50.035 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:50.035 #define SPDK_CONFIG_LIBDIR 00:09:50.035 #undef SPDK_CONFIG_LTO 00:09:50.035 #define SPDK_CONFIG_MAX_LCORES 128 00:09:50.035 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:50.035 #define SPDK_CONFIG_NVME_CUSE 1 00:09:50.035 #undef SPDK_CONFIG_OCF 00:09:50.035 #define SPDK_CONFIG_OCF_PATH 00:09:50.035 #define SPDK_CONFIG_OPENSSL_PATH 00:09:50.035 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:50.035 #define SPDK_CONFIG_PGO_DIR 00:09:50.035 #undef SPDK_CONFIG_PGO_USE 00:09:50.035 #define SPDK_CONFIG_PREFIX /usr/local 00:09:50.035 #undef SPDK_CONFIG_RAID5F 00:09:50.035 #undef SPDK_CONFIG_RBD 00:09:50.035 #define SPDK_CONFIG_RDMA 1 00:09:50.035 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:50.035 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:50.035 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:50.035 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:50.035 #define SPDK_CONFIG_SHARED 1 00:09:50.035 #undef SPDK_CONFIG_SMA 00:09:50.035 #define SPDK_CONFIG_TESTS 1 00:09:50.035 #undef SPDK_CONFIG_TSAN 
00:09:50.035 #define SPDK_CONFIG_UBLK 1 00:09:50.035 #define SPDK_CONFIG_UBSAN 1 00:09:50.035 #undef SPDK_CONFIG_UNIT_TESTS 00:09:50.035 #undef SPDK_CONFIG_URING 00:09:50.035 #define SPDK_CONFIG_URING_PATH 00:09:50.035 #undef SPDK_CONFIG_URING_ZNS 00:09:50.035 #undef SPDK_CONFIG_USDT 00:09:50.035 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:50.035 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:50.035 #define SPDK_CONFIG_VFIO_USER 1 00:09:50.035 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:50.035 #define SPDK_CONFIG_VHOST 1 00:09:50.035 #define SPDK_CONFIG_VIRTIO 1 00:09:50.035 #undef SPDK_CONFIG_VTUNE 00:09:50.035 #define SPDK_CONFIG_VTUNE_DIR 00:09:50.035 #define SPDK_CONFIG_WERROR 1 00:09:50.035 #define SPDK_CONFIG_WPDK_DIR 00:09:50.035 #undef SPDK_CONFIG_XNVME 00:09:50.035 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
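
Just above, applications.sh expands the whole generated include/spdk/config.h into a pattern match against "#define SPDK_CONFIG_DEBUG" to decide whether debug-only application options can be used. The same gate can be expressed with grep; the sketch below makes that substitution, so the tool choice is an assumption while the header path and macro come from the log:

    #!/usr/bin/env bash
    # Sketch: detect a debug-enabled SPDK build from its generated config
    # header. grep stands in for the glob match used by applications.sh.
    rootdir="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"

    if grep -q '#define SPDK_CONFIG_DEBUG 1' "$rootdir/include/spdk/config.h"; then
        echo "debug build detected"
    else
        echo "non-debug build: debug-only app flags stay off"
    fi
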
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:50.036 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:50.298 15:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
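
The long run of ": 0" (or ": 1", ": tcp", ": e810") lines, each followed by an "export SPDK_TEST_*", is autotest_common.sh giving every test flag a default through a parameter-default expansion and then exporting it; that is why flags this job already set echo their configured value while the rest echo 0. A tiny sketch of that idiom, using a made-up flag name so as not to guess any real defaults:

    #!/usr/bin/env bash
    # Sketch of the ": ${VAR:=default}; export VAR" idiom seen in the trace.
    # SPDK_TEST_EXAMPLE_FLAG is a hypothetical name used only for illustration.

    # Unset flag: the default is applied, xtrace would show ": 0".
    : "${SPDK_TEST_EXAMPLE_FLAG:=0}"
    export SPDK_TEST_EXAMPLE_FLAG
    echo "$SPDK_TEST_EXAMPLE_FLAG"      # -> 0

    # Pre-set flag: the default is skipped, xtrace would show ": 1".
    SPDK_TEST_EXAMPLE_FLAG=1
    : "${SPDK_TEST_EXAMPLE_FLAG:=0}"
    export SPDK_TEST_EXAMPLE_FLAG
    echo "$SPDK_TEST_EXAMPLE_FLAG"      # -> 1, the job-provided value survives
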
00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:50.298 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:50.299 15:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:50.299 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
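The sanitizer knobs exported just above can likewise be reproduced in isolation; a minimal sketch using the same values, including the libfuse3 leak suppression that the `echo leak:libfuse3.so` step writes out:

    # Sketch only: sanitizer settings as exported by the harness in this run.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LeakSanitizer suppressions: the known libfuse3 leak is ignored.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo leak:libfuse3.so > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock    # default SPDK JSON-RPC socket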
00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1330341 ]] 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1330341 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
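The set_test_storage call that the trace enters next walks a list of candidate directories (the test's own directory, a per-run fallback under /tmp, then the fallback root), reads df, and keeps the first mount with at least the requested free space (2 GiB here), exporting it as SPDK_TEST_STORAGE. A simplified sketch of that selection logic, not the verbatim helper:

    # Simplified sketch of the storage selection traced below.
    set_test_storage() {
        local requested_size=$1                  # 2147483648 bytes (2 GiB) in this run
        local storage_fallback target_dir mount avail
        local -a storage_candidates

        storage_fallback=$(mktemp -udt spdk.XXXXXX)
        storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

        for target_dir in "${storage_candidates[@]}"; do
            mkdir -p "$target_dir"
            # Ask df for the backing mount and its free space in bytes.
            read -r mount avail < <(df -B1 --output=target,avail "$target_dir" | tail -n1)
            if (( avail >= requested_size )); then
                export SPDK_TEST_STORAGE=$target_dir
                printf '* Found test storage at %s\n' "$target_dir"
                return 0
            fi
        done
        return 1
    }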
00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.1RAp4q 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1RAp4q/tests/target /tmp/spdk.1RAp4q 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93829271552 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837199872 00:09:50.300 15:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7007928320 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50408566784 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418597888 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144431104 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.300 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50418376704 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=225280 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:50.301 * Looking for test 
storage... 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=93829271552 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9222520832 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:50.301 15:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.301 15:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.301 --rc genhtml_branch_coverage=1 00:09:50.301 --rc genhtml_function_coverage=1 00:09:50.301 --rc genhtml_legend=1 00:09:50.301 --rc geninfo_all_blocks=1 00:09:50.301 --rc geninfo_unexecuted_blocks=1 00:09:50.301 00:09:50.301 ' 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.301 --rc genhtml_branch_coverage=1 00:09:50.301 --rc genhtml_function_coverage=1 00:09:50.301 --rc genhtml_legend=1 00:09:50.301 --rc geninfo_all_blocks=1 00:09:50.301 --rc geninfo_unexecuted_blocks=1 00:09:50.301 00:09:50.301 ' 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.301 --rc genhtml_branch_coverage=1 00:09:50.301 --rc genhtml_function_coverage=1 00:09:50.301 --rc genhtml_legend=1 00:09:50.301 --rc geninfo_all_blocks=1 00:09:50.301 --rc geninfo_unexecuted_blocks=1 00:09:50.301 00:09:50.301 ' 00:09:50.301 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.301 --rc genhtml_branch_coverage=1 00:09:50.301 --rc genhtml_function_coverage=1 00:09:50.301 --rc genhtml_legend=1 00:09:50.301 --rc geninfo_all_blocks=1 00:09:50.301 --rc geninfo_unexecuted_blocks=1 00:09:50.301 00:09:50.302 ' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.302 15:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.302 15:02:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.870 15:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.870 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.870 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.870 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.871 15:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:56.871 15:02:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:56.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:09:56.871 00:09:56.871 --- 10.0.0.2 ping statistics --- 00:09:56.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.871 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:09:56.871 00:09:56.871 --- 10.0.0.1 ping statistics --- 00:09:56.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.871 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 ************************************ 00:09:56.871 START TEST nvmf_filesystem_no_in_capsule 00:09:56.871 ************************************ 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1333362 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1333362 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1333362 ']' 00:09:56.871 
15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 [2024-12-09 15:02:58.248433] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:09:56.871 [2024-12-09 15:02:58.248475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.871 [2024-12-09 15:02:58.323530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.871 [2024-12-09 15:02:58.365760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.871 [2024-12-09 15:02:58.365799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.871 [2024-12-09 15:02:58.365808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.871 [2024-12-09 15:02:58.365816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.871 [2024-12-09 15:02:58.365824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:56.871 [2024-12-09 15:02:58.367253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.871 [2024-12-09 15:02:58.367370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.871 [2024-12-09 15:02:58.367474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.871 [2024-12-09 15:02:58.367474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 [2024-12-09 15:02:58.516808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 Malloc1 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.871 15:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.871 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.129 [2024-12-09 15:02:58.671438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:57.129 { 00:09:57.129 "name": "Malloc1", 00:09:57.129 "aliases": [ 00:09:57.129 "5f3715f9-f2e4-4d42-9f4e-a1494ce4e525" 00:09:57.129 ], 00:09:57.129 "product_name": "Malloc disk", 00:09:57.129 "block_size": 512, 00:09:57.129 "num_blocks": 1048576, 00:09:57.129 "uuid": "5f3715f9-f2e4-4d42-9f4e-a1494ce4e525", 00:09:57.129 "assigned_rate_limits": { 00:09:57.129 "rw_ios_per_sec": 0, 00:09:57.129 "rw_mbytes_per_sec": 0, 00:09:57.129 "r_mbytes_per_sec": 0, 00:09:57.129 "w_mbytes_per_sec": 0 00:09:57.129 }, 00:09:57.129 "claimed": true, 00:09:57.129 "claim_type": "exclusive_write", 00:09:57.129 "zoned": false, 00:09:57.129 "supported_io_types": { 00:09:57.129 "read": 
true, 00:09:57.129 "write": true, 00:09:57.129 "unmap": true, 00:09:57.129 "flush": true, 00:09:57.129 "reset": true, 00:09:57.129 "nvme_admin": false, 00:09:57.129 "nvme_io": false, 00:09:57.129 "nvme_io_md": false, 00:09:57.129 "write_zeroes": true, 00:09:57.129 "zcopy": true, 00:09:57.129 "get_zone_info": false, 00:09:57.129 "zone_management": false, 00:09:57.129 "zone_append": false, 00:09:57.129 "compare": false, 00:09:57.129 "compare_and_write": false, 00:09:57.129 "abort": true, 00:09:57.129 "seek_hole": false, 00:09:57.129 "seek_data": false, 00:09:57.129 "copy": true, 00:09:57.129 "nvme_iov_md": false 00:09:57.129 }, 00:09:57.129 "memory_domains": [ 00:09:57.129 { 00:09:57.129 "dma_device_id": "system", 00:09:57.129 "dma_device_type": 1 00:09:57.129 }, 00:09:57.129 { 00:09:57.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.129 "dma_device_type": 2 00:09:57.129 } 00:09:57.129 ], 00:09:57.129 "driver_specific": {} 00:09:57.129 } 00:09:57.129 ]' 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:57.129 15:02:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.498 15:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.498 15:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:58.498 15:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.498 15:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:58.498 15:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:00.391 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:00.391 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:00.391 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:00.391 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:00.392 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.392 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:00.392 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:00.392 15:03:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:00.392 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:00.956 15:03:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.886 ************************************ 00:10:01.886 START TEST filesystem_ext4 00:10:01.886 ************************************ 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
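The filesystem_ext4 sub-test starting here (and the btrfs and xfs variants after it) exercises the exported namespace through an ordinary mount. Condensed from the xtrace entries in this log, the initiator-side sequence is roughly the sketch below; the device name nvme0n1, the /mnt/device mount point and the target PID 1333362 are specific to this run and would differ elsewhere.

  # one-time prep after nvme connect (target/filesystem.sh in this run)
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1

  # per-filesystem check: make_filesystem <fstype>, then exercise the mount
  mkfs.ext4 -F /dev/nvme0n1p1          # btrfs and xfs use -f instead of -F
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 1333362                      # the nvmf_tgt process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1

The ext4, btrfs and xfs passes below all follow this pattern; only the mkfs invocation changes.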
00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:01.886 15:03:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:01.886 mke2fs 1.47.0 (5-Feb-2023) 00:10:01.886 Discarding device blocks: 0/522240 done 00:10:01.886 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:01.886 Filesystem UUID: fcb8e82a-2904-4bce-a221-1a3578f91ef2 00:10:01.886 Superblock backups stored on blocks: 00:10:01.886 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:01.886 00:10:01.886 Allocating group tables: 0/64 done 00:10:01.886 Writing inode tables: 0/64 done 00:10:02.450 Creating journal (8192 blocks): done 00:10:04.641 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:10:04.641 00:10:04.641 15:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:04.641 15:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.195 
15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1333362 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.195 00:10:11.195 real 0m9.019s 00:10:11.195 user 0m0.034s 00:10:11.195 sys 0m0.067s 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:11.195 ************************************ 00:10:11.195 END TEST filesystem_ext4 00:10:11.195 ************************************ 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.195 ************************************ 00:10:11.195 START TEST filesystem_btrfs 00:10:11.195 ************************************ 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:11.195 15:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:11.195 15:03:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:11.453 btrfs-progs v6.8.1 00:10:11.453 See https://btrfs.readthedocs.io for more information. 00:10:11.453 00:10:11.453 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:11.453 NOTE: several default settings have changed in version 5.15, please make sure 00:10:11.453 this does not affect your deployments: 00:10:11.453 - DUP for metadata (-m dup) 00:10:11.453 - enabled no-holes (-O no-holes) 00:10:11.453 - enabled free-space-tree (-R free-space-tree) 00:10:11.453 00:10:11.453 Label: (null) 00:10:11.453 UUID: 55de354f-0475-4e00-b169-f3fbb09dfa73 00:10:11.453 Node size: 16384 00:10:11.453 Sector size: 4096 (CPU page size: 4096) 00:10:11.453 Filesystem size: 510.00MiB 00:10:11.453 Block group profiles: 00:10:11.453 Data: single 8.00MiB 00:10:11.453 Metadata: DUP 32.00MiB 00:10:11.453 System: DUP 8.00MiB 00:10:11.453 SSD detected: yes 00:10:11.453 Zoned device: no 00:10:11.453 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:11.453 Checksum: crc32c 00:10:11.453 Number of devices: 1 00:10:11.453 Devices: 00:10:11.453 ID SIZE PATH 00:10:11.453 1 510.00MiB /dev/nvme0n1p1 00:10:11.453 00:10:11.453 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:11.453 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1333362 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.017 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.017 
15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.018 00:10:12.018 real 0m1.151s 00:10:12.018 user 0m0.029s 00:10:12.018 sys 0m0.111s 00:10:12.018 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.018 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.018 ************************************ 00:10:12.018 END TEST filesystem_btrfs 00:10:12.018 ************************************ 00:10:12.018 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:12.018 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:12.018 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.018 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.277 ************************************ 00:10:12.277 START TEST filesystem_xfs 00:10:12.277 ************************************ 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:12.277 15:03:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:12.277 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:12.277 = sectsz=512 attr=2, projid32bit=1 00:10:12.277 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:12.277 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:12.277 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:12.277 = sunit=0 swidth=0 blks 00:10:12.277 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:12.277 log =internal log bsize=4096 blocks=16384, version=2 00:10:12.277 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:12.277 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:13.209 Discarding blocks...Done. 00:10:13.209 15:03:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:13.209 15:03:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1333362 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.105 00:10:15.105 real 0m2.645s 00:10:15.105 user 0m0.023s 00:10:15.105 sys 0m0.075s 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:15.105 ************************************ 00:10:15.105 END TEST filesystem_xfs 00:10:15.105 ************************************ 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:15.105 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.363 15:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1333362 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1333362 ']' 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1333362 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.363 15:03:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1333362 00:10:15.363 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.363 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.363 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1333362' 00:10:15.363 killing process with pid 1333362 00:10:15.363 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1333362 00:10:15.363 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1333362 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:15.623 00:10:15.623 real 0m19.131s 00:10:15.623 user 1m15.374s 00:10:15.623 sys 0m1.404s 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.623 ************************************ 00:10:15.623 END TEST nvmf_filesystem_no_in_capsule 00:10:15.623 ************************************ 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.623 ************************************ 00:10:15.623 START TEST nvmf_filesystem_in_capsule 00:10:15.623 ************************************ 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1337280 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1337280 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1337280 ']' 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
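At this point the in-capsule variant launches its own nvmf_tgt (pid 1337280 in this run, inside the cvl_0_0_ns_spdk network namespace) and waits for the RPC socket. The bring-up that follows mirrors the no_in_capsule case, except the TCP transport is created with a 4096-byte in-capsule data size. Reassembled from the rpc_cmd calls in the trace (rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py; the core mask, listen address and bdev name are taken from this run), it is roughly:

  # target side
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side (this run passes the host's own UUID as hostnqn and hostid)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid=801347e8-3fd0-e911-906e-0017a4403562

The rpc_cmd entries in the trace below correspond to these calls, in the same order.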
00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.623 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.881 [2024-12-09 15:03:17.455730] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:10:15.881 [2024-12-09 15:03:17.455779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.881 [2024-12-09 15:03:17.533492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.881 [2024-12-09 15:03:17.574223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.881 [2024-12-09 15:03:17.574262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.881 [2024-12-09 15:03:17.574272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.881 [2024-12-09 15:03:17.574280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.881 [2024-12-09 15:03:17.574289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.881 [2024-12-09 15:03:17.575711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.881 [2024-12-09 15:03:17.575821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.881 [2024-12-09 15:03:17.575932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.881 [2024-12-09 15:03:17.575932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.881 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.881 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:15.881 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.881 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.881 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 [2024-12-09 15:03:17.709803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.139 15:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 Malloc1 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 [2024-12-09 15:03:17.872391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:16.139 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:16.139 15:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:16.140 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.140 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.140 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.140 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:16.140 { 00:10:16.140 "name": "Malloc1", 00:10:16.140 "aliases": [ 00:10:16.140 "07265804-d12f-43fe-baf7-1c22b11e2160" 00:10:16.140 ], 00:10:16.140 "product_name": "Malloc disk", 00:10:16.140 "block_size": 512, 00:10:16.140 "num_blocks": 1048576, 00:10:16.140 "uuid": "07265804-d12f-43fe-baf7-1c22b11e2160", 00:10:16.140 "assigned_rate_limits": { 00:10:16.140 "rw_ios_per_sec": 0, 00:10:16.140 "rw_mbytes_per_sec": 0, 00:10:16.140 "r_mbytes_per_sec": 0, 00:10:16.140 "w_mbytes_per_sec": 0 00:10:16.140 }, 00:10:16.140 "claimed": true, 00:10:16.140 "claim_type": "exclusive_write", 00:10:16.140 "zoned": false, 00:10:16.140 "supported_io_types": { 00:10:16.140 "read": true, 00:10:16.140 "write": true, 00:10:16.140 "unmap": true, 00:10:16.140 "flush": true, 00:10:16.140 "reset": true, 00:10:16.140 "nvme_admin": false, 00:10:16.140 "nvme_io": false, 00:10:16.140 "nvme_io_md": false, 00:10:16.140 "write_zeroes": true, 00:10:16.140 "zcopy": true, 00:10:16.140 "get_zone_info": false, 00:10:16.140 "zone_management": false, 00:10:16.140 "zone_append": false, 00:10:16.140 "compare": false, 00:10:16.140 "compare_and_write": false, 00:10:16.140 "abort": true, 00:10:16.140 "seek_hole": false, 00:10:16.140 "seek_data": false, 00:10:16.140 "copy": true, 00:10:16.140 "nvme_iov_md": false 00:10:16.140 }, 00:10:16.140 "memory_domains": [ 00:10:16.140 { 00:10:16.140 "dma_device_id": "system", 00:10:16.140 "dma_device_type": 1 00:10:16.140 }, 00:10:16.140 { 00:10:16.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.140 "dma_device_type": 2 00:10:16.140 } 00:10:16.140 ], 00:10:16.140 "driver_specific": {} 00:10:16.140 } 00:10:16.140 ]' 00:10:16.140 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:16.397 15:03:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.330 15:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.330 15:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:17.330 15:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.330 15:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:17.330 15:03:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:19.925 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:19.925 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:19.925 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.925 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:19.925 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.925 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.926 15:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:19.926 15:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.296 ************************************ 00:10:21.296 START TEST filesystem_in_capsule_ext4 00:10:21.296 ************************************ 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:21.296 15:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:21.296 mke2fs 1.47.0 (5-Feb-2023) 00:10:21.296 Discarding device blocks: 0/522240 done 00:10:21.296 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:21.296 Filesystem UUID: 775db1a8-ceb6-4ee9-9070-df1795733af5 00:10:21.296 Superblock backups stored on blocks: 00:10:21.296 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:21.296 00:10:21.296 Allocating group tables: 0/64 done 00:10:21.296 Writing inode tables: 
0/64 done 00:10:21.554 Creating journal (8192 blocks): done 00:10:21.554 Writing superblocks and filesystem accounting information: 0/64 done 00:10:21.554 00:10:21.554 15:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:21.554 15:03:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1337280 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.815 00:10:26.815 real 0m5.891s 00:10:26.815 user 0m0.022s 00:10:26.815 sys 0m0.075s 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.815 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:26.815 ************************************ 00:10:26.815 END TEST filesystem_in_capsule_ext4 00:10:26.815 ************************************ 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.074 
************************************ 00:10:27.074 START TEST filesystem_in_capsule_btrfs 00:10:27.074 ************************************ 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.074 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:27.074 btrfs-progs v6.8.1 00:10:27.074 See https://btrfs.readthedocs.io for more information. 00:10:27.074 00:10:27.074 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:27.074 NOTE: several default settings have changed in version 5.15, please make sure 00:10:27.074 this does not affect your deployments: 00:10:27.074 - DUP for metadata (-m dup) 00:10:27.074 - enabled no-holes (-O no-holes) 00:10:27.074 - enabled free-space-tree (-R free-space-tree) 00:10:27.074 00:10:27.074 Label: (null) 00:10:27.074 UUID: 716215f6-fecf-431e-a41d-6439db6e006f 00:10:27.074 Node size: 16384 00:10:27.074 Sector size: 4096 (CPU page size: 4096) 00:10:27.074 Filesystem size: 510.00MiB 00:10:27.074 Block group profiles: 00:10:27.074 Data: single 8.00MiB 00:10:27.074 Metadata: DUP 32.00MiB 00:10:27.074 System: DUP 8.00MiB 00:10:27.074 SSD detected: yes 00:10:27.074 Zoned device: no 00:10:27.074 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:27.074 Checksum: crc32c 00:10:27.074 Number of devices: 1 00:10:27.074 Devices: 00:10:27.074 ID SIZE PATH 00:10:27.074 1 510.00MiB /dev/nvme0n1p1 00:10:27.074 00:10:27.332 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:27.332 15:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.590 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.590 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:27.590 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.590 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:27.590 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:27.590 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1337280 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.849 00:10:27.849 real 0m0.753s 00:10:27.849 user 0m0.028s 00:10:27.849 sys 0m0.110s 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:27.849 ************************************ 00:10:27.849 END TEST filesystem_in_capsule_btrfs 00:10:27.849 ************************************ 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.849 ************************************ 00:10:27.849 START TEST filesystem_in_capsule_xfs 00:10:27.849 ************************************ 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.849 15:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:27.849 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:27.849 = sectsz=512 attr=2, projid32bit=1 00:10:27.849 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:27.849 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:27.849 data = bsize=4096 blocks=130560, imaxpct=25 00:10:27.849 = sunit=0 swidth=0 blks 00:10:27.849 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:27.850 log =internal log bsize=4096 blocks=16384, version=2 00:10:27.850 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:27.850 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:28.785 Discarding blocks...Done. 
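The xfs subtest that follows repeats the same mount/touch/sync/rm/umount check just traced for btrfs. As a standalone reference, the per-filesystem sequence from target/filesystem.sh reduces to roughly the sketch below (device, mount point and pid variable as they appear in the trace; the retry and error handling of the real script is omitted):

  # exercise basic I/O on a filesystem created on the NVMe-oF attached namespace
  fstype=xfs                      # the btrfs subtest is identical apart from mkfs.btrfs
  dev=/dev/nvme0n1p1              # partition created on the attached namespace
  mnt=/mnt/device

  mkfs.$fstype -f "$dev"
  mount "$dev" "$mnt"
  touch "$mnt/aaa"
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"

  kill -0 "$nvmfpid"                        # target process (pid 1337280 here) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible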
00:10:28.785 15:03:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.785 15:03:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.684 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.684 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:30.684 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1337280 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.685 00:10:30.685 real 0m2.879s 00:10:30.685 user 0m0.020s 00:10:30.685 sys 0m0.077s 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.685 ************************************ 00:10:30.685 END TEST filesystem_in_capsule_xfs 00:10:30.685 ************************************ 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:30.685 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:30.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1337280 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1337280 ']' 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1337280 00:10:30.943 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1337280 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1337280' 00:10:30.944 killing process with pid 1337280 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1337280 00:10:30.944 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1337280 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:31.203 00:10:31.203 real 0m15.534s 00:10:31.203 user 1m1.069s 00:10:31.203 sys 0m1.367s 00:10:31.203 15:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.203 ************************************ 00:10:31.203 END TEST nvmf_filesystem_in_capsule 00:10:31.203 ************************************ 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.203 15:03:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.203 rmmod nvme_tcp 00:10:31.203 rmmod nvme_fabrics 00:10:31.463 rmmod nvme_keyring 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.463 15:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.371 00:10:33.371 real 0m43.508s 00:10:33.371 user 2m18.501s 00:10:33.371 sys 0m7.486s 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.371 
************************************ 00:10:33.371 END TEST nvmf_filesystem 00:10:33.371 ************************************ 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.371 15:03:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.630 ************************************ 00:10:33.630 START TEST nvmf_target_discovery 00:10:33.630 ************************************ 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:33.630 * Looking for test storage... 00:10:33.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.630 --rc genhtml_branch_coverage=1 00:10:33.630 --rc genhtml_function_coverage=1 00:10:33.630 --rc genhtml_legend=1 00:10:33.630 --rc geninfo_all_blocks=1 00:10:33.630 --rc geninfo_unexecuted_blocks=1 00:10:33.630 00:10:33.630 ' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.630 --rc genhtml_branch_coverage=1 00:10:33.630 --rc genhtml_function_coverage=1 00:10:33.630 --rc genhtml_legend=1 00:10:33.630 --rc geninfo_all_blocks=1 00:10:33.630 --rc geninfo_unexecuted_blocks=1 00:10:33.630 00:10:33.630 ' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.630 --rc genhtml_branch_coverage=1 00:10:33.630 --rc genhtml_function_coverage=1 00:10:33.630 --rc genhtml_legend=1 00:10:33.630 --rc geninfo_all_blocks=1 00:10:33.630 --rc geninfo_unexecuted_blocks=1 00:10:33.630 00:10:33.630 ' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:33.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.630 --rc genhtml_branch_coverage=1 00:10:33.630 --rc genhtml_function_coverage=1 00:10:33.630 --rc genhtml_legend=1 00:10:33.630 --rc geninfo_all_blocks=1 00:10:33.630 --rc geninfo_unexecuted_blocks=1 00:10:33.630 00:10:33.630 ' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.630 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.631 15:03:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.203 15:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.203 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:40.204 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:40.204 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:40.204 Found net devices under 0000:af:00.0: cvl_0_0 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
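The enumeration traced here resolves each supported E810 PCI function to its kernel net device by globbing sysfs, which is where the cvl_0_0 name just found (and cvl_0_1 on the second port) comes from. A minimal sketch of that mapping, using the same glob as nvmf/common.sh above:

  # map each supported PCI address to the net device(s) registered under it
  for pci in "${pci_devs[@]}"; do                      # e.g. 0000:af:00.0 and 0000:af:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface name
    net_devs+=("${pci_net_devs[@]}")
  done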
00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:40.204 Found net devices under 0000:af:00.1: cvl_0_1 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.204 15:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:10:40.204 00:10:40.204 --- 10.0.0.2 ping statistics --- 00:10:40.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.204 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:10:40.204 00:10:40.204 --- 10.0.0.1 ping statistics --- 00:10:40.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.204 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.204 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1343487 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1343487 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1343487 ']' 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.205 15:03:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.205 [2024-12-09 15:03:41.393286] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:10:40.205 [2024-12-09 15:03:41.393332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.205 [2024-12-09 15:03:41.469141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.205 [2024-12-09 15:03:41.513524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.205 [2024-12-09 15:03:41.513557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.205 [2024-12-09 15:03:41.513567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.205 [2024-12-09 15:03:41.513575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.205 [2024-12-09 15:03:41.513583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
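With the target side of the link moved into the cvl_0_0_ns_spdk namespace (10.0.0.2) and the initiator side left on the host (10.0.0.1), the test now configures the freshly started nvmf_tgt over its RPC socket at /var/tmp/spdk.sock. The rpc_cmd calls traced in the records that follow correspond roughly to the sketch below (assuming the standard scripts/rpc.py client; the test goes through its rpc_cmd wrapper instead):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport with the options traced below, then four null-backed subsystems
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s "SPDK$(printf '%014d' "$i")"
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  # discovery listener plus a referral on port 4430
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The nvme discover output further down (six records: the current discovery subsystem, the four NVMe subsystems and the 4430 referral) is the check that this configuration took effect.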
00:10:40.205 [2024-12-09 15:03:41.515056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.205 [2024-12-09 15:03:41.515193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.205 [2024-12-09 15:03:41.518236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.205 [2024-12-09 15:03:41.518240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.473 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.473 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:40.473 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.473 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.473 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.734 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.734 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.734 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.734 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.734 [2024-12-09 15:03:42.276980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 Null1 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 [2024-12-09 15:03:42.332357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 Null2 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:40.735 Null3 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 Null4 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.735 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:40.994 00:10:40.994 Discovery Log Number of Records 6, Generation counter 6 00:10:40.994 =====Discovery Log Entry 0====== 00:10:40.994 trtype: tcp 00:10:40.994 adrfam: ipv4 00:10:40.994 subtype: current discovery subsystem 00:10:40.994 treq: not required 00:10:40.994 portid: 0 00:10:40.994 trsvcid: 4420 00:10:40.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.994 traddr: 10.0.0.2 00:10:40.994 eflags: explicit discovery connections, duplicate discovery information 00:10:40.994 sectype: none 00:10:40.994 =====Discovery Log Entry 1====== 00:10:40.994 trtype: tcp 00:10:40.994 adrfam: ipv4 00:10:40.994 subtype: nvme subsystem 00:10:40.994 treq: not required 00:10:40.994 portid: 0 00:10:40.994 trsvcid: 4420 00:10:40.994 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:40.994 traddr: 10.0.0.2 00:10:40.994 eflags: none 00:10:40.994 sectype: none 00:10:40.994 =====Discovery Log Entry 2====== 00:10:40.994 trtype: tcp 00:10:40.994 adrfam: ipv4 00:10:40.994 subtype: nvme subsystem 00:10:40.994 treq: not required 00:10:40.994 portid: 0 00:10:40.994 trsvcid: 4420 00:10:40.994 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:40.994 traddr: 10.0.0.2 00:10:40.994 eflags: none 00:10:40.994 sectype: none 00:10:40.994 =====Discovery Log Entry 3====== 00:10:40.994 trtype: tcp 00:10:40.994 adrfam: ipv4 00:10:40.994 subtype: nvme subsystem 00:10:40.994 treq: not required 00:10:40.994 portid: 0 00:10:40.994 trsvcid: 4420 00:10:40.994 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:40.994 traddr: 10.0.0.2 00:10:40.994 eflags: none 00:10:40.994 sectype: none 00:10:40.994 =====Discovery Log Entry 4====== 00:10:40.994 trtype: tcp 00:10:40.994 adrfam: ipv4 00:10:40.994 subtype: nvme subsystem 
00:10:40.994 treq: not required 00:10:40.994 portid: 0 00:10:40.994 trsvcid: 4420 00:10:40.994 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:40.994 traddr: 10.0.0.2 00:10:40.994 eflags: none 00:10:40.994 sectype: none 00:10:40.994 =====Discovery Log Entry 5====== 00:10:40.994 trtype: tcp 00:10:40.994 adrfam: ipv4 00:10:40.994 subtype: discovery subsystem referral 00:10:40.994 treq: not required 00:10:40.994 portid: 0 00:10:40.994 trsvcid: 4430 00:10:40.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.994 traddr: 10.0.0.2 00:10:40.994 eflags: none 00:10:40.994 sectype: none 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:40.994 Perform nvmf subsystem discovery via RPC 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.994 [ 00:10:40.994 { 00:10:40.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:40.994 "subtype": "Discovery", 00:10:40.994 "listen_addresses": [ 00:10:40.994 { 00:10:40.994 "trtype": "TCP", 00:10:40.994 "adrfam": "IPv4", 00:10:40.994 "traddr": "10.0.0.2", 00:10:40.994 "trsvcid": "4420" 00:10:40.994 } 00:10:40.994 ], 00:10:40.994 "allow_any_host": true, 00:10:40.994 "hosts": [] 00:10:40.994 }, 00:10:40.994 { 00:10:40.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.994 "subtype": "NVMe", 00:10:40.994 "listen_addresses": [ 00:10:40.994 { 00:10:40.994 "trtype": "TCP", 00:10:40.994 "adrfam": "IPv4", 00:10:40.994 "traddr": "10.0.0.2", 00:10:40.994 "trsvcid": "4420" 00:10:40.994 } 00:10:40.994 ], 00:10:40.994 "allow_any_host": true, 00:10:40.994 "hosts": [], 00:10:40.994 "serial_number": "SPDK00000000000001", 00:10:40.994 "model_number": "SPDK bdev Controller", 00:10:40.994 "max_namespaces": 32, 00:10:40.994 "min_cntlid": 1, 00:10:40.994 "max_cntlid": 65519, 00:10:40.994 "namespaces": [ 00:10:40.994 { 00:10:40.994 "nsid": 1, 00:10:40.994 "bdev_name": "Null1", 00:10:40.994 "name": "Null1", 00:10:40.994 "nguid": "0985603E726C4835ADC8A59E8AACE036", 00:10:40.994 "uuid": "0985603e-726c-4835-adc8-a59e8aace036" 00:10:40.994 } 00:10:40.994 ] 00:10:40.994 }, 00:10:40.994 { 00:10:40.994 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.994 "subtype": "NVMe", 00:10:40.994 "listen_addresses": [ 00:10:40.994 { 00:10:40.994 "trtype": "TCP", 00:10:40.994 "adrfam": "IPv4", 00:10:40.994 "traddr": "10.0.0.2", 00:10:40.994 "trsvcid": "4420" 00:10:40.994 } 00:10:40.994 ], 00:10:40.994 "allow_any_host": true, 00:10:40.994 "hosts": [], 00:10:40.994 "serial_number": "SPDK00000000000002", 00:10:40.994 "model_number": "SPDK bdev Controller", 00:10:40.994 "max_namespaces": 32, 00:10:40.994 "min_cntlid": 1, 00:10:40.994 "max_cntlid": 65519, 00:10:40.994 "namespaces": [ 00:10:40.994 { 00:10:40.994 "nsid": 1, 00:10:40.994 "bdev_name": "Null2", 00:10:40.994 "name": "Null2", 00:10:40.994 "nguid": "F1A0E2D3CBFD4139B6C3B0617E6EC323", 00:10:40.994 "uuid": "f1a0e2d3-cbfd-4139-b6c3-b0617e6ec323" 00:10:40.994 } 00:10:40.994 ] 00:10:40.994 }, 00:10:40.994 { 00:10:40.994 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:40.994 "subtype": "NVMe", 00:10:40.994 "listen_addresses": [ 00:10:40.994 { 00:10:40.994 "trtype": "TCP", 00:10:40.994 "adrfam": "IPv4", 00:10:40.994 "traddr": "10.0.0.2", 
00:10:40.994 "trsvcid": "4420" 00:10:40.994 } 00:10:40.994 ], 00:10:40.994 "allow_any_host": true, 00:10:40.994 "hosts": [], 00:10:40.994 "serial_number": "SPDK00000000000003", 00:10:40.994 "model_number": "SPDK bdev Controller", 00:10:40.994 "max_namespaces": 32, 00:10:40.994 "min_cntlid": 1, 00:10:40.994 "max_cntlid": 65519, 00:10:40.994 "namespaces": [ 00:10:40.994 { 00:10:40.994 "nsid": 1, 00:10:40.994 "bdev_name": "Null3", 00:10:40.994 "name": "Null3", 00:10:40.994 "nguid": "E23400EC533B462EA494760B42E9FDD4", 00:10:40.994 "uuid": "e23400ec-533b-462e-a494-760b42e9fdd4" 00:10:40.994 } 00:10:40.994 ] 00:10:40.994 }, 00:10:40.994 { 00:10:40.994 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:40.994 "subtype": "NVMe", 00:10:40.994 "listen_addresses": [ 00:10:40.994 { 00:10:40.994 "trtype": "TCP", 00:10:40.994 "adrfam": "IPv4", 00:10:40.994 "traddr": "10.0.0.2", 00:10:40.994 "trsvcid": "4420" 00:10:40.994 } 00:10:40.994 ], 00:10:40.994 "allow_any_host": true, 00:10:40.994 "hosts": [], 00:10:40.994 "serial_number": "SPDK00000000000004", 00:10:40.994 "model_number": "SPDK bdev Controller", 00:10:40.994 "max_namespaces": 32, 00:10:40.994 "min_cntlid": 1, 00:10:40.994 "max_cntlid": 65519, 00:10:40.994 "namespaces": [ 00:10:40.994 { 00:10:40.994 "nsid": 1, 00:10:40.994 "bdev_name": "Null4", 00:10:40.994 "name": "Null4", 00:10:40.994 "nguid": "F7BF8D7477284721B8980F8B8B7D122C", 00:10:40.994 "uuid": "f7bf8d74-7728-4721-b898-0f8b8b7d122c" 00:10:40.994 } 00:10:40.994 ] 00:10:40.994 } 00:10:40.994 ] 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.994 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:40.995 15:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.995 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.254 rmmod nvme_tcp 00:10:41.254 rmmod nvme_fabrics 00:10:41.254 rmmod nvme_keyring 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1343487 ']' 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1343487 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1343487 ']' 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1343487 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1343487 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1343487' 00:10:41.254 killing process with pid 1343487 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1343487 00:10:41.254 15:03:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1343487 00:10:41.513 15:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.513 15:03:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.416 00:10:43.416 real 0m9.959s 00:10:43.416 user 0m8.288s 00:10:43.416 sys 0m4.802s 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.416 ************************************ 00:10:43.416 END TEST nvmf_target_discovery 00:10:43.416 ************************************ 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.416 15:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.676 ************************************ 00:10:43.676 START TEST nvmf_referrals 00:10:43.676 ************************************ 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.676 * Looking for test storage... 
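For readers skimming the trace: the nvmf_target_discovery run that ends above boils down to a short JSON-RPC sequence against the running nvmf_tgt, issued through rpc_cmd (the harness's wrapper for SPDK's JSON-RPC client). The sketch below is only a condensation of the xtrace lines above, not additional test code; every command and argument appears verbatim in the trace, and $NVME_HOSTNQN/$NVME_HOSTID are the values nvmf/common.sh generates for this host.

  # one null-bdev-backed NVMe-oF subsystem per iteration, each listening on 10.0.0.2:4420
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create Null$i 102400 512                        # per-subsystem null backing bdev (512-byte blocks)
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
  # expected result: 6 discovery-log records (the discovery subsystem itself, cnode1-4, and the
  # referral on port 4430), which matches the 'Discovery Log Number of Records 6' output above;
  # teardown then walks the same loop with nvmf_delete_subsystem / bdev_null_delete and removes
  # the referral with nvmf_discovery_remove_referral before nvmftestfini unloads the nvme modules.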
00:10:43.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.676 --rc genhtml_branch_coverage=1 00:10:43.676 --rc genhtml_function_coverage=1 00:10:43.676 --rc genhtml_legend=1 00:10:43.676 --rc geninfo_all_blocks=1 00:10:43.676 --rc geninfo_unexecuted_blocks=1 00:10:43.676 00:10:43.676 ' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.676 --rc genhtml_branch_coverage=1 00:10:43.676 --rc genhtml_function_coverage=1 00:10:43.676 --rc genhtml_legend=1 00:10:43.676 --rc geninfo_all_blocks=1 00:10:43.676 --rc geninfo_unexecuted_blocks=1 00:10:43.676 00:10:43.676 ' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.676 --rc genhtml_branch_coverage=1 00:10:43.676 --rc genhtml_function_coverage=1 00:10:43.676 --rc genhtml_legend=1 00:10:43.676 --rc geninfo_all_blocks=1 00:10:43.676 --rc geninfo_unexecuted_blocks=1 00:10:43.676 00:10:43.676 ' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:43.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.676 --rc genhtml_branch_coverage=1 00:10:43.676 --rc genhtml_function_coverage=1 00:10:43.676 --rc genhtml_legend=1 00:10:43.676 --rc geninfo_all_blocks=1 00:10:43.676 --rc geninfo_unexecuted_blocks=1 00:10:43.676 00:10:43.676 ' 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.676 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.677 15:03:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:50.250 15:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:50.250 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:50.250 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:50.250 
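The wall of xtrace above is nvmf/common.sh sorting the host's NICs by PCI vendor:device ID before the referrals test picks its interfaces; on this machine both ports at 0000:af:00.0/.1 report 8086:159b. A condensed view of the lookup tables being built, taken from the pci_bus_cache expansions in the trace (the family labels in the comments are an interpretation, not part of the log):

  intel=0x8086  mellanox=0x15b3
  e810=(0x1592 0x159b)              # Intel E810 family - the IDs matched in this run
  x722=(0x37d2)                     # Intel X722
  mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)   # Mellanox parts
  pci_devs=("${e810[@]}")           # the run is configured for e810 NICs, so only that list is kept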
15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:50.250 Found net devices under 0000:af:00.0: cvl_0_0 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:50.250 Found net devices under 0000:af:00.1: cvl_0_1 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.250 15:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:10:50.250 00:10:50.250 --- 10.0.0.2 ping statistics --- 00:10:50.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.250 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:50.250 00:10:50.250 --- 10.0.0.1 ping statistics --- 00:10:50.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.250 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:50.250 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1347294 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1347294 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1347294 ']' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
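Before the referrals test can talk to anything, nvmf/common.sh moves one E810 port into a private network namespace for the target and leaves the other in the default namespace as the initiator; the two pings above confirm 10.0.0.1 and 10.0.0.2 reach each other across that split, after which nvmf_tgt is started inside the namespace (pid 1347294 in this run). The lines below are only a condensation of the bring-up commands visible in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
  # (the real rule also carries an SPDK_NVMF comment tag so the iptables-save | grep -v SPDK_NVMF
  #  | iptables-restore step seen in nvmftestfini can strip it again)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # -m 0xF gives four reactors (cores 0-3) and -e 0xFFFF enables all tracepoint groups, matching
  # the startup notices below; waitforlisten then blocks on /var/tmp/spdk.sock before any rpc_cmd.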
00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 [2024-12-09 15:03:51.477664] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:10:50.251 [2024-12-09 15:03:51.477707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.251 [2024-12-09 15:03:51.555814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.251 [2024-12-09 15:03:51.595725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.251 [2024-12-09 15:03:51.595764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.251 [2024-12-09 15:03:51.595773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.251 [2024-12-09 15:03:51.595779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.251 [2024-12-09 15:03:51.595784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.251 [2024-12-09 15:03:51.597257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.251 [2024-12-09 15:03:51.597312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.251 [2024-12-09 15:03:51.597420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.251 [2024-12-09 15:03:51.597420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 [2024-12-09 15:03:51.747531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:50.251 [2024-12-09 15:03:51.777398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.251 15:03:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.510 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:50.510 15:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:50.511 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.511 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.511 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.511 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.511 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.769 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.028 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.287 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:51.287 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.287 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:51.287 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:51.287 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.287 15:03:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.545 15:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:51.545 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:51.546 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.804 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:52.062 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:52.062 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:52.062 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.062 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.062 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.062 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.063 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
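The referrals.sh trace above reduces to a short RPC cycle: add referrals to the discovery service, read them back through both the RPC view and a host-side nvme discover, then remove them again. The rpc_cmd helper in the log is SPDK's scripts/rpc.py wrapper. Below is a minimal standalone sketch of that cycle, assuming an nvmf_tgt is already running with its discovery service listening on 10.0.0.2:8009 and rpc.py talking to the default /var/tmp/spdk.sock; the host NQN is a placeholder (the run above uses this host's own generated UUID NQN).

# Add three referrals on port 4430, matching the test above
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
# RPC view of the referral list
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# Host view: referrals show up as extra discovery log entries, so filter out
# the current discovery subsystem itself (same jq filter the test uses)
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000 \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# Tear the referrals back down
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done

The second half of the test repeats the same checks with subsystem-qualified referrals (-n discovery and -n nqn.2016-06.io.spdk:cnode1), which is why 127.0.0.2 appears twice in the later get_referrals output before being removed per subsystem NQN.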
00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.321 15:03:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.321 rmmod nvme_tcp 00:10:52.321 rmmod nvme_fabrics 00:10:52.321 rmmod nvme_keyring 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1347294 ']' 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1347294 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1347294 ']' 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1347294 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1347294 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.321 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1347294' 00:10:52.321 killing process with pid 1347294 00:10:52.580 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1347294 00:10:52.580 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1347294 00:10:52.580 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.580 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.581 15:03:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.581 15:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.126 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:55.127 00:10:55.127 real 0m11.123s 00:10:55.127 user 0m13.244s 00:10:55.127 sys 0m5.230s 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.127 ************************************ 00:10:55.127 END TEST nvmf_referrals 00:10:55.127 ************************************ 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.127 ************************************ 00:10:55.127 START TEST nvmf_connect_disconnect 00:10:55.127 ************************************ 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:55.127 * Looking for test storage... 00:10:55.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.127 15:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.127 --rc genhtml_branch_coverage=1 00:10:55.127 --rc genhtml_function_coverage=1 00:10:55.127 --rc genhtml_legend=1 00:10:55.127 --rc geninfo_all_blocks=1 00:10:55.127 --rc geninfo_unexecuted_blocks=1 00:10:55.127 00:10:55.127 ' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.127 --rc genhtml_branch_coverage=1 00:10:55.127 --rc genhtml_function_coverage=1 00:10:55.127 --rc genhtml_legend=1 00:10:55.127 --rc geninfo_all_blocks=1 00:10:55.127 --rc geninfo_unexecuted_blocks=1 00:10:55.127 00:10:55.127 ' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.127 --rc genhtml_branch_coverage=1 00:10:55.127 --rc genhtml_function_coverage=1 00:10:55.127 --rc genhtml_legend=1 00:10:55.127 --rc geninfo_all_blocks=1 00:10:55.127 --rc geninfo_unexecuted_blocks=1 00:10:55.127 00:10:55.127 ' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.127 --rc genhtml_branch_coverage=1 00:10:55.127 --rc genhtml_function_coverage=1 00:10:55.127 --rc genhtml_legend=1 00:10:55.127 --rc geninfo_all_blocks=1 00:10:55.127 --rc geninfo_unexecuted_blocks=1 00:10:55.127 00:10:55.127 ' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.127 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.128 15:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.128 15:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.399 
15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:00.399 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.399 
15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:00.399 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:00.399 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:00.400 Found net devices under 0000:af:00.0: cvl_0_0 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
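The nvmf/common.sh trace around this point is NIC auto-detection: it walks the known NVMe-oF-capable PCI IDs, on this rig matches the two Intel E810 ports (vendor 0x8086, device 0x159b), and resolves each PCI function to its kernel net device via /sys/bus/pci/devices/$pci/net. A rough standalone equivalent of that lookup, assuming lspci is available and reusing the 8086:159b device ID reported in this run:

# List E810 functions with full PCI addresses, then map each one to the
# netdev the kernel bound to it, mirroring the sysfs expansion in
# nvmf/common.sh
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
    done
done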
00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:00.400 Found net devices under 0000:af:00.1: cvl_0_1 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.400 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:11:00.659 00:11:00.659 --- 10.0.0.2 ping statistics --- 00:11:00.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.659 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:11:00.659 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:00.659 00:11:00.659 --- 10.0.0.1 ping statistics --- 00:11:00.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.660 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.660 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1351275 00:11:00.918 15:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1351275 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1351275 ']' 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.918 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.918 [2024-12-09 15:04:02.514420] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:11:00.918 [2024-12-09 15:04:02.514463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.918 [2024-12-09 15:04:02.593773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.918 [2024-12-09 15:04:02.635097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.918 [2024-12-09 15:04:02.635132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.918 [2024-12-09 15:04:02.635138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.918 [2024-12-09 15:04:02.635144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.918 [2024-12-09 15:04:02.635149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
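The block above is nvmftestinit building the physical-NIC test topology: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, connectivity is ping-checked in both directions, and nvmf_tgt is started inside the namespace. A condensed sketch of the same plumbing, using the interface names and addresses from this run (the nvmf_tgt path is relative to the SPDK tree, and the iptables comment tag the ipts helper adds is omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &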
00:11:00.918 [2024-12-09 15:04:02.636503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.918 [2024-12-09 15:04:02.636615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.918 [2024-12-09 15:04:02.636636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.918 [2024-12-09 15:04:02.636639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.178 [2024-12-09 15:04:02.785985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.178 15:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.178 [2024-12-09 15:04:02.854714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:01.178 15:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:04.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.632 rmmod nvme_tcp 00:11:17.632 rmmod nvme_fabrics 00:11:17.632 rmmod nvme_keyring 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1351275 ']' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1351275 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1351275 ']' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1351275 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
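The connect_disconnect body traced above condenses to five RPCs followed by a five-iteration connect/disconnect loop (num_iterations=5). The RPC names and arguments are taken from the trace; the rpc.py path and the host-side nvme-cli calls are assumptions inferred from the "disconnected 1 controller(s)" messages that follow.

    rpc="$SPDK/scripts/rpc.py"                                   # assumed path within this workspace
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                               # returns the bdev name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # prints "disconnected 1 controller(s)"
    done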
00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1351275 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1351275' 00:11:17.632 killing process with pid 1351275 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1351275 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1351275 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.632 15:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.170 00:11:20.170 real 0m25.025s 00:11:20.170 user 1m8.128s 00:11:20.170 sys 0m5.726s 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:20.170 ************************************ 00:11:20.170 END TEST nvmf_connect_disconnect 00:11:20.170 ************************************ 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.170 15:04:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.170 ************************************ 00:11:20.170 START TEST nvmf_multitarget 00:11:20.170 ************************************ 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:20.170 * Looking for test storage... 00:11:20.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.170 --rc genhtml_branch_coverage=1 00:11:20.170 --rc genhtml_function_coverage=1 00:11:20.170 --rc genhtml_legend=1 00:11:20.170 --rc geninfo_all_blocks=1 00:11:20.170 --rc geninfo_unexecuted_blocks=1 00:11:20.170 00:11:20.170 ' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.170 --rc genhtml_branch_coverage=1 00:11:20.170 --rc genhtml_function_coverage=1 00:11:20.170 --rc genhtml_legend=1 00:11:20.170 --rc geninfo_all_blocks=1 00:11:20.170 --rc geninfo_unexecuted_blocks=1 00:11:20.170 00:11:20.170 ' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.170 --rc genhtml_branch_coverage=1 00:11:20.170 --rc genhtml_function_coverage=1 00:11:20.170 --rc genhtml_legend=1 00:11:20.170 --rc geninfo_all_blocks=1 00:11:20.170 --rc geninfo_unexecuted_blocks=1 00:11:20.170 00:11:20.170 ' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.170 --rc genhtml_branch_coverage=1 00:11:20.170 --rc genhtml_function_coverage=1 00:11:20.170 --rc genhtml_legend=1 00:11:20.170 --rc geninfo_all_blocks=1 00:11:20.170 --rc geninfo_unexecuted_blocks=1 00:11:20.170 00:11:20.170 ' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.170 15:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.170 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:20.171 15:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.171 15:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
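The gather_supported_nvmf_pci_devs trace that follows matches each supported vendor:device pair (the two E810 ports in this run are 0x8086:0x159b) and then looks for kernel interfaces under each function's sysfs node. A standalone approximation of that lookup is below; using lspci is an assumption, since the traced common.sh reads from its own pci_bus_cache instead.

    # Enumerate net devices backed by Intel E810 (8086:159b) functions via sysfs.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue
            echo "Found net devices under $pci: $(basename "$path")"
        done
    done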
00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:26.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:26.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:26.742 Found net devices under 0000:af:00.0: cvl_0_0 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:26.742 Found net devices under 0000:af:00.1: cvl_0_1 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.742 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:11:26.743 00:11:26.743 --- 10.0.0.2 ping statistics --- 00:11:26.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.743 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:11:26.743 00:11:26.743 --- 10.0.0.1 ping statistics --- 00:11:26.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.743 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1357614 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1357614 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1357614 ']' 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.743 [2024-12-09 15:04:27.694300] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
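The nvmf_tcp_init trace just above splits the two E810 ports across namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened with a tagged iptables rule, and both directions are pinged. A condensed, run-as-root sketch of those traced commands:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace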
00:11:26.743 [2024-12-09 15:04:27.694352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.743 [2024-12-09 15:04:27.773900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.743 [2024-12-09 15:04:27.815139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.743 [2024-12-09 15:04:27.815177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.743 [2024-12-09 15:04:27.815184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.743 [2024-12-09 15:04:27.815190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.743 [2024-12-09 15:04:27.815195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.743 [2024-12-09 15:04:27.816691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.743 [2024-12-09 15:04:27.816804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.743 [2024-12-09 15:04:27.816909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.743 [2024-12-09 15:04:27.816911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.743 15:04:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:26.743 "nvmf_tgt_1" 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:26.743 "nvmf_tgt_2" 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
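The nvmf_multitarget body traced around this point condenses to the following multitarget_rpc.py sequence: assert that only the default target exists, create nvmf_tgt_1 and nvmf_tgt_2 with -s 32, check the count reaches 3, delete both, and check the count drops back to 1. $rpc_py abbreviates the full workspace path shown in the trace.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default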
00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:26.743 true 00:11:26.743 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:27.002 true 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.002 rmmod nvme_tcp 00:11:27.002 rmmod nvme_fabrics 00:11:27.002 rmmod nvme_keyring 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1357614 ']' 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1357614 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1357614 ']' 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1357614 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.002 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1357614 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.262 15:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1357614' 00:11:27.262 killing process with pid 1357614 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1357614 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1357614 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.262 15:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.799 00:11:29.799 real 0m9.521s 00:11:29.799 user 0m7.086s 00:11:29.799 sys 0m4.895s 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.799 ************************************ 00:11:29.799 END TEST nvmf_multitarget 00:11:29.799 ************************************ 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.799 ************************************ 00:11:29.799 START TEST nvmf_rpc 00:11:29.799 ************************************ 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:29.799 * Looking for test storage... 
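Both tests above finish through nvmftestfini, whose traces amount to: unload the host-side NVMe/TCP modules, kill the target process, strip the SPDK_NVMF-tagged iptables rules, and tear down the namespace plumbing. A condensed sketch; the explicit `ip netns delete` is an assumption standing in for the traced _remove_spdk_ns helper.

    sync
    modprobe -v -r nvme-tcp                    # the verbose log above shows nvme_fabrics/nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"         # killprocess in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk            # assumption: stands in for _remove_spdk_ns
    ip -4 addr flush cvl_0_1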
00:11:29.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:29.799 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.800 --rc genhtml_branch_coverage=1 00:11:29.800 --rc genhtml_function_coverage=1 00:11:29.800 --rc genhtml_legend=1 00:11:29.800 --rc geninfo_all_blocks=1 00:11:29.800 --rc geninfo_unexecuted_blocks=1 00:11:29.800 00:11:29.800 ' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.800 --rc genhtml_branch_coverage=1 00:11:29.800 --rc genhtml_function_coverage=1 00:11:29.800 --rc genhtml_legend=1 00:11:29.800 --rc geninfo_all_blocks=1 00:11:29.800 --rc geninfo_unexecuted_blocks=1 00:11:29.800 00:11:29.800 ' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.800 --rc genhtml_branch_coverage=1 00:11:29.800 --rc genhtml_function_coverage=1 00:11:29.800 --rc genhtml_legend=1 00:11:29.800 --rc geninfo_all_blocks=1 00:11:29.800 --rc geninfo_unexecuted_blocks=1 00:11:29.800 00:11:29.800 ' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.800 --rc genhtml_branch_coverage=1 00:11:29.800 --rc genhtml_function_coverage=1 00:11:29.800 --rc genhtml_legend=1 00:11:29.800 --rc geninfo_all_blocks=1 00:11:29.800 --rc geninfo_unexecuted_blocks=1 00:11:29.800 00:11:29.800 ' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.800 15:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.800 15:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:36.373 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:36.373 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.373 15:04:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:36.373 Found net devices under 0000:af:00.0: cvl_0_0 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:36.373 Found net devices under 0000:af:00.1: cvl_0_1 00:11:36.373 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.374 15:04:37 
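The discovery above walks /sys/bus/pci/devices to find the net devices behind the two E810 ports (cvl_0_0 and cvl_0_1); the wiring traced right after it moves the target-side port into a private network namespace, addresses both ends, opens TCP/4420 and ping-checks the path. A condensed sketch of those steps with the device names and addresses reported in this run (a hypothetical root-only reproduction, not the script itself):

# Map a PCI function to its kernel net devices via sysfs.
pci=0000:af:00.0
for path in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
done

# Isolate the target-side port in its own namespace, address both ends,
# allow NVMe/TCP traffic on 4420 and verify reachability in both directions.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator side -> namespaced target
ip netns exec "$ns" ping -c 1 10.0.0.1   # namespaced target -> initiator side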
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:11:36.374 00:11:36.374 --- 10.0.0.2 ping statistics --- 00:11:36.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.374 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:11:36.374 00:11:36.374 --- 10.0.0.1 ping statistics --- 00:11:36.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.374 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1361339 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1361339 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1361339 ']' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.374 [2024-12-09 15:04:37.333054] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:11:36.374 [2024-12-09 15:04:37.333099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.374 [2024-12-09 15:04:37.410266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.374 [2024-12-09 15:04:37.449355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.374 [2024-12-09 15:04:37.449395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.374 [2024-12-09 15:04:37.449401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.374 [2024-12-09 15:04:37.449408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.374 [2024-12-09 15:04:37.449413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.374 [2024-12-09 15:04:37.450890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.374 [2024-12-09 15:04:37.451000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.374 [2024-12-09 15:04:37.451108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.374 [2024-12-09 15:04:37.451109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:36.374 "tick_rate": 2100000000, 00:11:36.374 "poll_groups": [ 00:11:36.374 { 00:11:36.374 "name": "nvmf_tgt_poll_group_000", 00:11:36.374 "admin_qpairs": 0, 00:11:36.374 "io_qpairs": 0, 00:11:36.374 "current_admin_qpairs": 0, 00:11:36.374 "current_io_qpairs": 0, 00:11:36.374 "pending_bdev_io": 0, 00:11:36.374 "completed_nvme_io": 0, 00:11:36.374 "transports": [] 00:11:36.374 }, 00:11:36.374 { 00:11:36.374 "name": "nvmf_tgt_poll_group_001", 00:11:36.374 "admin_qpairs": 0, 00:11:36.374 "io_qpairs": 0, 00:11:36.374 "current_admin_qpairs": 0, 00:11:36.374 "current_io_qpairs": 0, 00:11:36.374 "pending_bdev_io": 0, 00:11:36.374 "completed_nvme_io": 0, 00:11:36.374 "transports": [] 00:11:36.374 }, 00:11:36.374 { 00:11:36.374 "name": "nvmf_tgt_poll_group_002", 00:11:36.374 "admin_qpairs": 0, 00:11:36.374 "io_qpairs": 0, 00:11:36.374 
"current_admin_qpairs": 0, 00:11:36.374 "current_io_qpairs": 0, 00:11:36.374 "pending_bdev_io": 0, 00:11:36.374 "completed_nvme_io": 0, 00:11:36.374 "transports": [] 00:11:36.374 }, 00:11:36.374 { 00:11:36.374 "name": "nvmf_tgt_poll_group_003", 00:11:36.374 "admin_qpairs": 0, 00:11:36.374 "io_qpairs": 0, 00:11:36.374 "current_admin_qpairs": 0, 00:11:36.374 "current_io_qpairs": 0, 00:11:36.374 "pending_bdev_io": 0, 00:11:36.374 "completed_nvme_io": 0, 00:11:36.374 "transports": [] 00:11:36.374 } 00:11:36.374 ] 00:11:36.374 }' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.374 [2024-12-09 15:04:37.693047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.374 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:36.375 "tick_rate": 2100000000, 00:11:36.375 "poll_groups": [ 00:11:36.375 { 00:11:36.375 "name": "nvmf_tgt_poll_group_000", 00:11:36.375 "admin_qpairs": 0, 00:11:36.375 "io_qpairs": 0, 00:11:36.375 "current_admin_qpairs": 0, 00:11:36.375 "current_io_qpairs": 0, 00:11:36.375 "pending_bdev_io": 0, 00:11:36.375 "completed_nvme_io": 0, 00:11:36.375 "transports": [ 00:11:36.375 { 00:11:36.375 "trtype": "TCP" 00:11:36.375 } 00:11:36.375 ] 00:11:36.375 }, 00:11:36.375 { 00:11:36.375 "name": "nvmf_tgt_poll_group_001", 00:11:36.375 "admin_qpairs": 0, 00:11:36.375 "io_qpairs": 0, 00:11:36.375 "current_admin_qpairs": 0, 00:11:36.375 "current_io_qpairs": 0, 00:11:36.375 "pending_bdev_io": 0, 00:11:36.375 "completed_nvme_io": 0, 00:11:36.375 "transports": [ 00:11:36.375 { 00:11:36.375 "trtype": "TCP" 00:11:36.375 } 00:11:36.375 ] 00:11:36.375 }, 00:11:36.375 { 00:11:36.375 "name": "nvmf_tgt_poll_group_002", 00:11:36.375 "admin_qpairs": 0, 00:11:36.375 "io_qpairs": 0, 00:11:36.375 "current_admin_qpairs": 0, 00:11:36.375 "current_io_qpairs": 0, 00:11:36.375 "pending_bdev_io": 0, 00:11:36.375 "completed_nvme_io": 0, 00:11:36.375 "transports": [ 00:11:36.375 { 00:11:36.375 "trtype": "TCP" 
00:11:36.375 } 00:11:36.375 ] 00:11:36.375 }, 00:11:36.375 { 00:11:36.375 "name": "nvmf_tgt_poll_group_003", 00:11:36.375 "admin_qpairs": 0, 00:11:36.375 "io_qpairs": 0, 00:11:36.375 "current_admin_qpairs": 0, 00:11:36.375 "current_io_qpairs": 0, 00:11:36.375 "pending_bdev_io": 0, 00:11:36.375 "completed_nvme_io": 0, 00:11:36.375 "transports": [ 00:11:36.375 { 00:11:36.375 "trtype": "TCP" 00:11:36.375 } 00:11:36.375 ] 00:11:36.375 } 00:11:36.375 ] 00:11:36.375 }' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 Malloc1 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 [2024-12-09 15:04:37.876112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.375 [2024-12-09 15:04:37.914768] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:11:36.375 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:36.375 could not add new controller: failed to write to nvme-fabrics device 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:36.375 15:04:37 
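The nvmf_get_stats output above is reduced by rpc.sh's jcount/jsum helpers: jq pulls a field out of every poll group, wc -l counts them, and awk sums the qpair counters. A standalone sketch of the same aggregation, assuming the target's RPC socket is the default /var/tmp/spdk.sock and that SPDK's scripts/rpc.py is available (rpc_cmd in the trace issues the same RPC methods):

# Count poll groups and sum their queue-pair counters from nvmf_get_stats.
stats=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)
groups=$(jq '.poll_groups[].name' <<< "$stats" | wc -l)
admin_qpairs=$(jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')
io_qpairs=$(jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')
echo "poll groups: $groups  admin qpairs: $admin_qpairs  io qpairs: $io_qpairs"

On the freshly started target this prints 4 poll groups and zero qpairs, matching the counts checked in the trace.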
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.375 15:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.752 15:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.752 15:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.752 15:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.752 15:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.752 15:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:39.655 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.655 [2024-12-09 15:04:41.288550] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:11:39.656 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:39.656 could not add new controller: failed to write to nvme-fabrics device 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.656 
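The sequence above is the per-host access-control check: with allow_any_host disabled the connect is rejected ("does not allow host"), it succeeds once the host NQN is added with nvmf_subsystem_add_host, it is rejected again after nvmf_subsystem_remove_host, and the subsystem is then reopened with nvmf_subsystem_allow_any_host -e. A hedged sketch of that round trip with scripts/rpc.py and nvme-cli, reusing the $hostnqn/$hostid pair from the earlier sketch (the must_fail helper is illustrative, not part of the test scripts):

subnqn=nqn.2016-06.io.spdk:cnode1
addr=(-t tcp -a 10.0.0.2 -s 4420)
must_fail() { if "$@"; then echo "unexpected success: $*" >&2; return 1; fi; }

# Host not allowed yet: the fabrics write fails with an I/O error.
must_fail nvme connect -n "$subnqn" "${addr[@]}" --hostnqn="$hostnqn" --hostid="$hostid"

# Allow this host explicitly, connect, then drop the session again.
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn"
nvme connect -n "$subnqn" "${addr[@]}" --hostnqn="$hostnqn" --hostid="$hostid"
nvme disconnect -n "$subnqn"

# Revoke the host and confirm the rejection comes back.
scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
must_fail nvme connect -n "$subnqn" "${addr[@]}" --hostnqn="$hostnqn" --hostid="$hostid"

# Finally open the subsystem to any host, as the trace does before the loop starts.
scripts/rpc.py nvmf_subsystem_allow_any_host -e "$subnqn"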
15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.656 15:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.031 15:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.031 15:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.031 15:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.031 15:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.031 15:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.933 
15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.933 [2024-12-09 15:04:44.717590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.933 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.192 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.192 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.192 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.192 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.192 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.192 15:04:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.128 15:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.128 15:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.128 15:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.128 15:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.128 15:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.659 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.660 15:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.660 [2024-12-09 15:04:48.011701] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.660 15:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.597 15:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.597 15:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.597 15:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.597 15:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.597 15:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.499 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.499 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.499 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.500 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.500 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.500 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.500 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.758 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.758 [2024-12-09 15:04:51.365569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.759 15:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.694 15:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.694 15:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.694 15:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.694 15:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.695 15:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.227 
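Each pass of the five-iteration loop traced here rebuilds the subsystem from scratch: create it, add the TCP listener on 10.0.0.2:4420, export the Malloc1 bdev as namespace 5, allow any host, connect, wait for the serial to show up, disconnect, and delete everything again. A sketch of a single iteration with scripts/rpc.py and nvme-cli (same NQN, address and serial as this run; the polling helpers are replaced by a fixed sleep and one lsblk check):

subnqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

scripts/rpc.py nvmf_create_subsystem "$subnqn" -s "$serial"
scripts/rpc.py nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 5   # Malloc1 was created once, before the loop
scripts/rpc.py nvmf_subsystem_allow_any_host -e "$subnqn"

nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn" --hostid="$hostid"
sleep 2                                          # crude stand-in for waitforserial
lsblk -l -o NAME,SERIAL | grep -w "$serial"      # the namespace should now be visible
nvme disconnect -n "$subnqn"

scripts/rpc.py nvmf_subsystem_remove_ns "$subnqn" 5
scripts/rpc.py nvmf_delete_subsystem "$subnqn"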
15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.227 [2024-12-09 15:04:54.679084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.227 15:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.256 15:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.256 15:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.256 15:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.256 15:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.256 15:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.159 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.159 [2024-12-09 15:04:57.949229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.418 15:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.792 15:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.792 15:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.792 15:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.792 15:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.792 15:04:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:59.695 
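The loop traced above (target/rpc.sh@81-94) repeats one full subsystem lifecycle per iteration. A condensed sketch of a single pass is below; it is a hedged reconstruction from the trace, not the test script itself, and assumes a running SPDK nvmf target reachable through spdk/scripts/rpc.py (which the rpc_cmd helper drives), an existing Malloc1 bdev, and the host NQN/ID pair used throughout this run.
# create the subsystem and expose it over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
# attach from the initiator side, wait for the namespace to appear, then tear everything down
nvme connect --hostnqn="$(nvme gen-hostnqn)" --hostid=<host uuid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # waitforserial: the test loops until this reports 1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
The <host uuid> placeholder stands for the 801347e8-... value visible in the trace; on another machine it would be whatever uuid the host NQN carries.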
15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 [2024-12-09 15:05:01.337561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 [2024-12-09 15:05:01.385650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.695 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 
15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 [2024-12-09 15:05:01.433779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 [2024-12-09 15:05:01.481960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.696 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 [2024-12-09 15:05:01.530138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:59.955 "tick_rate": 2100000000, 00:11:59.955 "poll_groups": [ 00:11:59.955 { 00:11:59.955 "name": "nvmf_tgt_poll_group_000", 00:11:59.955 "admin_qpairs": 2, 00:11:59.955 "io_qpairs": 168, 00:11:59.955 "current_admin_qpairs": 0, 00:11:59.955 "current_io_qpairs": 0, 00:11:59.955 "pending_bdev_io": 0, 00:11:59.955 "completed_nvme_io": 212, 00:11:59.955 "transports": [ 00:11:59.955 { 00:11:59.955 "trtype": "TCP" 00:11:59.955 } 00:11:59.955 ] 00:11:59.955 }, 00:11:59.955 { 00:11:59.955 "name": "nvmf_tgt_poll_group_001", 00:11:59.955 "admin_qpairs": 2, 00:11:59.955 "io_qpairs": 168, 00:11:59.955 "current_admin_qpairs": 0, 00:11:59.955 "current_io_qpairs": 0, 00:11:59.955 "pending_bdev_io": 0, 00:11:59.955 "completed_nvme_io": 321, 00:11:59.955 "transports": [ 00:11:59.955 { 00:11:59.955 "trtype": "TCP" 00:11:59.955 } 00:11:59.955 ] 00:11:59.955 }, 00:11:59.955 { 00:11:59.955 "name": "nvmf_tgt_poll_group_002", 00:11:59.955 "admin_qpairs": 1, 00:11:59.955 "io_qpairs": 168, 00:11:59.955 "current_admin_qpairs": 0, 00:11:59.955 "current_io_qpairs": 0, 00:11:59.955 "pending_bdev_io": 0, 00:11:59.955 "completed_nvme_io": 237, 00:11:59.955 "transports": [ 00:11:59.955 { 00:11:59.955 "trtype": "TCP" 00:11:59.955 } 00:11:59.955 ] 00:11:59.955 }, 00:11:59.955 { 00:11:59.955 "name": "nvmf_tgt_poll_group_003", 00:11:59.955 "admin_qpairs": 2, 00:11:59.955 "io_qpairs": 168, 00:11:59.955 "current_admin_qpairs": 0, 00:11:59.955 "current_io_qpairs": 0, 00:11:59.955 "pending_bdev_io": 0, 00:11:59.955 "completed_nvme_io": 252, 00:11:59.955 "transports": [ 00:11:59.955 { 00:11:59.955 "trtype": "TCP" 00:11:59.955 } 00:11:59.955 ] 00:11:59.955 } 00:11:59.955 ] 00:11:59.955 }' 00:11:59.955 15:05:01 
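The nvmf_get_stats dump above is then reduced by the jsum helper, which pipes the per-poll-group counters through jq and awk. A minimal sketch of that aggregation, assuming scripts/rpc.py can reach the same RPC socket as the trace:
# jsum '.poll_groups[].admin_qpairs': sum one counter across all poll groups
scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'
# same reduction for io_qpairs; the test only asserts the sums are positive, e.g. (( 7 > 0 )) and (( 672 > 0 )) in this run
scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'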
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:59.955 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.956 rmmod nvme_tcp 00:11:59.956 rmmod nvme_fabrics 00:11:59.956 rmmod nvme_keyring 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1361339 ']' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1361339 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1361339 ']' 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1361339 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:59.956 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.214 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1361339 00:12:00.214 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.214 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.214 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1361339' 00:12:00.214 killing process with pid 1361339 00:12:00.214 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1361339 00:12:00.214 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1361339 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.215 15:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.750 00:12:02.750 real 0m32.933s 00:12:02.750 user 1m39.431s 00:12:02.750 sys 0m6.496s 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.750 ************************************ 00:12:02.750 END TEST nvmf_rpc 00:12:02.750 ************************************ 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.750 ************************************ 00:12:02.750 START TEST nvmf_invalid 00:12:02.750 ************************************ 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.750 * Looking for test storage... 
00:12:02.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:02.750 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.751 --rc genhtml_branch_coverage=1 00:12:02.751 --rc genhtml_function_coverage=1 00:12:02.751 --rc genhtml_legend=1 00:12:02.751 --rc geninfo_all_blocks=1 00:12:02.751 --rc geninfo_unexecuted_blocks=1 00:12:02.751 00:12:02.751 ' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.751 --rc genhtml_branch_coverage=1 00:12:02.751 --rc genhtml_function_coverage=1 00:12:02.751 --rc genhtml_legend=1 00:12:02.751 --rc geninfo_all_blocks=1 00:12:02.751 --rc geninfo_unexecuted_blocks=1 00:12:02.751 00:12:02.751 ' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.751 --rc genhtml_branch_coverage=1 00:12:02.751 --rc genhtml_function_coverage=1 00:12:02.751 --rc genhtml_legend=1 00:12:02.751 --rc geninfo_all_blocks=1 00:12:02.751 --rc geninfo_unexecuted_blocks=1 00:12:02.751 00:12:02.751 ' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.751 --rc genhtml_branch_coverage=1 00:12:02.751 --rc genhtml_function_coverage=1 00:12:02.751 --rc genhtml_legend=1 00:12:02.751 --rc geninfo_all_blocks=1 00:12:02.751 --rc geninfo_unexecuted_blocks=1 00:12:02.751 00:12:02.751 ' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:02.751 15:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.751 15:05:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:09.321 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.321 15:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:09.321 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:09.321 Found net devices under 0000:af:00.0: cvl_0_0 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:09.321 Found net devices under 0000:af:00.1: cvl_0_1 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.321 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:09.322 00:12:09.322 --- 10.0.0.2 ping statistics --- 00:12:09.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.322 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:12:09.322 00:12:09.322 --- 10.0.0.1 ping statistics --- 00:12:09.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.322 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1369075 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1369075 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1369075 ']' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.322 [2024-12-09 15:05:10.346080] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
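
Aside: the nvmf_tcp_init trace above reduces to a short netns recipe: flush the two e810 ports, move one into a private namespace, give each side an address, open TCP/4420, and ping both ways before the target is launched inside the namespace. A condensed sketch of those steps (interface names, namespace, and addresses copied from the trace; the iptables comment tag and error handling are omitted, and the variable names are illustrative, not the ones used by nvmf/common.sh):

    TARGET_IF=cvl_0_0            # port handed to the target namespace
    INITIATOR_IF=cvl_0_1         # port left in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic in, then verify reachability in both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
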
00:12:09.322 [2024-12-09 15:05:10.346124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.322 [2024-12-09 15:05:10.407001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.322 [2024-12-09 15:05:10.448985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.322 [2024-12-09 15:05:10.449020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.322 [2024-12-09 15:05:10.449027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.322 [2024-12-09 15:05:10.449033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.322 [2024-12-09 15:05:10.449037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.322 [2024-12-09 15:05:10.452236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.322 [2024-12-09 15:05:10.452274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.322 [2024-12-09 15:05:10.452380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.322 [2024-12-09 15:05:10.452381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7392 00:12:09.322 [2024-12-09 15:05:10.762716] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:09.322 { 00:12:09.322 "nqn": "nqn.2016-06.io.spdk:cnode7392", 00:12:09.322 "tgt_name": "foobar", 00:12:09.322 "method": "nvmf_create_subsystem", 00:12:09.322 "req_id": 1 00:12:09.322 } 00:12:09.322 Got JSON-RPC error response 00:12:09.322 response: 00:12:09.322 { 00:12:09.322 "code": -32603, 00:12:09.322 "message": "Unable to find target foobar" 00:12:09.322 }' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:09.322 { 00:12:09.322 "nqn": "nqn.2016-06.io.spdk:cnode7392", 00:12:09.322 "tgt_name": "foobar", 00:12:09.322 "method": "nvmf_create_subsystem", 00:12:09.322 "req_id": 1 00:12:09.322 } 00:12:09.322 Got JSON-RPC error response 00:12:09.322 
response: 00:12:09.322 { 00:12:09.322 "code": -32603, 00:12:09.322 "message": "Unable to find target foobar" 00:12:09.322 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15377 00:12:09.322 [2024-12-09 15:05:10.963373] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15377: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:09.322 { 00:12:09.322 "nqn": "nqn.2016-06.io.spdk:cnode15377", 00:12:09.322 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.322 "method": "nvmf_create_subsystem", 00:12:09.322 "req_id": 1 00:12:09.322 } 00:12:09.322 Got JSON-RPC error response 00:12:09.322 response: 00:12:09.322 { 00:12:09.322 "code": -32602, 00:12:09.322 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.322 }' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:09.322 { 00:12:09.322 "nqn": "nqn.2016-06.io.spdk:cnode15377", 00:12:09.322 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.322 "method": "nvmf_create_subsystem", 00:12:09.322 "req_id": 1 00:12:09.322 } 00:12:09.322 Got JSON-RPC error response 00:12:09.322 response: 00:12:09.322 { 00:12:09.322 "code": -32602, 00:12:09.322 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.322 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:09.322 15:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7791 00:12:09.581 [2024-12-09 15:05:11.164055] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7791: invalid model number 'SPDK_Controller' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:09.582 { 00:12:09.582 "nqn": "nqn.2016-06.io.spdk:cnode7791", 00:12:09.582 "model_number": "SPDK_Controller\u001f", 00:12:09.582 "method": "nvmf_create_subsystem", 00:12:09.582 "req_id": 1 00:12:09.582 } 00:12:09.582 Got JSON-RPC error response 00:12:09.582 response: 00:12:09.582 { 00:12:09.582 "code": -32602, 00:12:09.582 "message": "Invalid MN SPDK_Controller\u001f" 00:12:09.582 }' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:09.582 { 00:12:09.582 "nqn": "nqn.2016-06.io.spdk:cnode7791", 00:12:09.582 "model_number": "SPDK_Controller\u001f", 00:12:09.582 "method": "nvmf_create_subsystem", 00:12:09.582 "req_id": 1 00:12:09.582 } 00:12:09.582 Got JSON-RPC error response 00:12:09.582 response: 00:12:09.582 { 00:12:09.582 "code": -32602, 00:12:09.582 "message": "Invalid MN SPDK_Controller\u001f" 00:12:09.582 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:09.582 15:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:09.582 
15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 
00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.582 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}ikH]6a@h['\''}rH/+4|}E' 00:12:09.583 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '}ikH]6a@h['\''}rH/+4|}E' nqn.2016-06.io.spdk:cnode19320 00:12:09.842 [2024-12-09 15:05:11.521257] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19320: invalid serial number '}ikH]6a@h['}rH/+4|}E' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:09.842 { 00:12:09.842 "nqn": "nqn.2016-06.io.spdk:cnode19320", 00:12:09.842 "serial_number": "}ikH]6a@h['\''}rH/+4|}E\u007f", 00:12:09.842 "method": "nvmf_create_subsystem", 00:12:09.842 "req_id": 1 00:12:09.842 } 00:12:09.842 Got JSON-RPC error response 00:12:09.842 response: 00:12:09.842 { 00:12:09.842 "code": -32602, 00:12:09.842 "message": "Invalid SN }ikH]6a@h['\''}rH/+4|}E\u007f" 00:12:09.842 }' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:09.842 { 00:12:09.842 "nqn": "nqn.2016-06.io.spdk:cnode19320", 00:12:09.842 "serial_number": "}ikH]6a@h['}rH/+4|}E\u007f", 00:12:09.842 "method": "nvmf_create_subsystem", 00:12:09.842 "req_id": 1 00:12:09.842 } 00:12:09.842 Got JSON-RPC error response 00:12:09.842 response: 00:12:09.842 { 00:12:09.842 "code": -32602, 00:12:09.842 "message": "Invalid SN }ikH]6a@h['}rH/+4|}E\u007f" 00:12:09.842 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' 
'67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4e' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.842 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.101 15:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:10.101 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.102 
15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 
00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:10.102 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:12:10.103 15:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W$ba:N:+^h_Rr5=UMAH}?+ /dev/null' 00:12:12.431 15:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.339 00:12:14.339 real 0m11.964s 00:12:14.339 user 0m18.505s 00:12:14.339 sys 0m5.362s 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:14.339 ************************************ 00:12:14.339 END TEST nvmf_invalid 00:12:14.339 ************************************ 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.339 15:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.598 ************************************ 00:12:14.598 START TEST nvmf_connect_stress 00:12:14.598 ************************************ 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:14.598 * Looking for test storage... 
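
Aside: the nvmf_invalid test that just finished follows one pattern throughout: build a random printable string with gen_random_s (the long printf %x / echo -e loop traced above), pass it as a target name, serial number, or model number to nvmf_create_subsystem, and check that the JSON-RPC error names the offending field. A compact illustration of that pattern, under the assumption it is a sketch rather than the exact helper from target/invalid.sh:

    # gen_random_s here is a compact stand-in for the traced loop,
    # not the exact invalid.sh implementation.
    gen_random_s() {
        local length=$1 ll code char string=
        for ((ll = 0; ll < length; ll++)); do
            code=$((33 + RANDOM % 94))                # printable ASCII, 0x21-0x7e
            printf -v char "\\x$(printf %x "$code")"  # one random character
            string+=$char
        done
        printf '%s\n' "$string"
    }

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Feed a 21-character random serial number to the target and look for the
    # "Invalid SN" JSON-RPC error, mirroring the invalid.sh@54/@55 steps above.
    sn=$(gen_random_s 21)
    out=$($rpc nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode19320 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo "got the expected Invalid SN error"
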
00:12:14.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:14.598 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.599 --rc genhtml_branch_coverage=1 00:12:14.599 --rc genhtml_function_coverage=1 00:12:14.599 --rc genhtml_legend=1 00:12:14.599 --rc geninfo_all_blocks=1 00:12:14.599 --rc geninfo_unexecuted_blocks=1 00:12:14.599 00:12:14.599 ' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.599 --rc genhtml_branch_coverage=1 00:12:14.599 --rc genhtml_function_coverage=1 00:12:14.599 --rc genhtml_legend=1 00:12:14.599 --rc geninfo_all_blocks=1 00:12:14.599 --rc geninfo_unexecuted_blocks=1 00:12:14.599 00:12:14.599 ' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.599 --rc genhtml_branch_coverage=1 00:12:14.599 --rc genhtml_function_coverage=1 00:12:14.599 --rc genhtml_legend=1 00:12:14.599 --rc geninfo_all_blocks=1 00:12:14.599 --rc geninfo_unexecuted_blocks=1 00:12:14.599 00:12:14.599 ' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.599 --rc genhtml_branch_coverage=1 00:12:14.599 --rc genhtml_function_coverage=1 00:12:14.599 --rc genhtml_legend=1 00:12:14.599 --rc geninfo_all_blocks=1 00:12:14.599 --rc geninfo_unexecuted_blocks=1 00:12:14.599 00:12:14.599 ' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:14.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.599 15:05:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.170 15:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:21.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:21.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:21.170 Found net devices under 0000:af:00.0: cvl_0_0 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:21.170 Found net devices under 0000:af:00.1: cvl_0_1 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.170 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:12:21.171 00:12:21.171 --- 10.0.0.2 ping statistics --- 00:12:21.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.171 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:21.171 00:12:21.171 --- 10.0.0.1 ping statistics --- 00:12:21.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.171 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1373223 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1373223 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1373223 ']' 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:21.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 [2024-12-09 15:05:22.366637] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:12:21.171 [2024-12-09 15:05:22.366686] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.171 [2024-12-09 15:05:22.447633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:21.171 [2024-12-09 15:05:22.488200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.171 [2024-12-09 15:05:22.488241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.171 [2024-12-09 15:05:22.488249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.171 [2024-12-09 15:05:22.488255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.171 [2024-12-09 15:05:22.488260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.171 [2024-12-09 15:05:22.489570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.171 [2024-12-09 15:05:22.489680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.171 [2024-12-09 15:05:22.489681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 [2024-12-09 15:05:22.637718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
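At this point the harness has nvmf_tgt up inside the cvl_0_0_ns_spdk namespace (pid 1373223, started with -m 0xE) and waitforlisten has confirmed the /var/tmp/spdk.sock RPC socket, so the rpc_cmd calls around this point provision the target before connect_stress is launched: the TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem, its 10.0.0.2:4420 listener, and the NULL1 backing bdev. A minimal standalone sketch of that same provisioning follows; it is only illustrative, assuming scripts/rpc.py from this checkout and the /var/tmp/spdk.sock socket shown in the trace (rpc_cmd is a thin wrapper that forwards to rpc.py), with every parameter taken verbatim from the traced commands.

#!/usr/bin/env bash
# Sketch of the provisioning rpc_cmd performs here, using rpc.py directly.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# TCP transport with the options used by the test (-t tcp -o -u 8192, as traced)
rpc nvmf_create_transport -t tcp -o -u 8192
# Test subsystem: allow any host (-a), fixed serial number, at most 10 namespaces
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listener on the data IP assigned to cvl_0_0 inside the cvl_0_0_ns_spdk namespace
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 1000 MB null bdev with 512-byte blocks, used as the namespace backing device
rpc bdev_null_create NULL1 1000 512

# connect_stress is then pointed at the same listener with the traced arguments:
# test/nvme/connect_stress/connect_stress -c 0x1 \
#   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
# while the seq 1 20 / cat loop visible below batches additional RPC requests into rpc.txt
# for the stress phase.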
00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 [2024-12-09 15:05:22.657946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.171 NULL1 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1373450 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.171 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:21.172 15:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.172 15:05:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.430 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.430 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:21.430 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.430 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.430 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.689 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.689 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:21.689 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.689 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.689 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.947 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.947 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:21.948 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.948 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.948 15:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.515 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.515 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:22.515 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.515 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.515 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.774 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.774 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:22.774 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.774 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.774 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.129 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.129 15:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:23.129 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.129 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.129 15:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.388 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.388 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:23.388 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.388 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.388 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.646 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.646 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:23.646 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.646 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.646 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.905 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.905 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:23.905 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.905 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.905 15:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.471 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.471 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:24.471 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.471 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.471 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.730 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.730 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:24.730 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.730 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.730 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.989 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.989 15:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:24.989 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.989 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.989 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.247 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.247 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:25.247 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.247 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.247 15:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.814 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.814 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:25.814 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.814 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.814 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.072 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.072 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:26.072 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.072 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.072 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.330 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.330 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:26.330 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.330 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.330 15:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.589 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.589 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:26.589 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.589 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.589 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.848 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.848 15:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:26.848 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.848 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.848 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.414 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.414 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:27.414 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.414 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.414 15:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.673 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:27.673 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.673 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.673 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.931 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.931 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:27.931 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.931 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.931 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.189 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.189 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:28.189 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.189 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.189 15:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.755 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.755 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:28.755 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.755 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.755 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.013 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.013 15:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:29.013 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.013 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.013 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.272 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.272 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:29.272 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.272 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.272 15:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.530 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:29.530 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.530 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.530 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.789 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.789 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:29.789 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.789 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.789 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.356 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:30.356 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.356 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.356 15:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.614 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.614 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:30.614 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.614 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.614 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.872 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.872 15:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:30.872 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.872 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.872 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.131 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.131 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:31.131 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.131 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.131 15:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.131 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1373450 00:12:31.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1373450) - No such process 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1373450 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.390 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.649 rmmod nvme_tcp 00:12:31.649 rmmod nvme_fabrics 00:12:31.649 rmmod nvme_keyring 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1373223 ']' 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1373223 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1373223 ']' 00:12:31.649 15:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1373223 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1373223 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1373223' 00:12:31.649 killing process with pid 1373223 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1373223 00:12:31.649 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1373223 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.908 15:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.814 00:12:33.814 real 0m19.366s 00:12:33.814 user 0m40.373s 00:12:33.814 sys 0m8.740s 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:33.814 ************************************ 00:12:33.814 END TEST nvmf_connect_stress 00:12:33.814 ************************************ 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.814 
15:05:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.814 ************************************ 00:12:33.814 START TEST nvmf_fused_ordering 00:12:33.814 ************************************ 00:12:33.814 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:34.073 * Looking for test storage... 00:12:34.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.073 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:34.073 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:34.073 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:34.073 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:34.073 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.073 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.074 --rc genhtml_branch_coverage=1 00:12:34.074 --rc genhtml_function_coverage=1 00:12:34.074 --rc genhtml_legend=1 00:12:34.074 --rc geninfo_all_blocks=1 00:12:34.074 --rc geninfo_unexecuted_blocks=1 00:12:34.074 00:12:34.074 ' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.074 --rc genhtml_branch_coverage=1 00:12:34.074 --rc genhtml_function_coverage=1 00:12:34.074 --rc genhtml_legend=1 00:12:34.074 --rc geninfo_all_blocks=1 00:12:34.074 --rc geninfo_unexecuted_blocks=1 00:12:34.074 00:12:34.074 ' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.074 --rc genhtml_branch_coverage=1 00:12:34.074 --rc genhtml_function_coverage=1 00:12:34.074 --rc genhtml_legend=1 00:12:34.074 --rc geninfo_all_blocks=1 00:12:34.074 --rc geninfo_unexecuted_blocks=1 00:12:34.074 00:12:34.074 ' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.074 --rc genhtml_branch_coverage=1 00:12:34.074 --rc genhtml_function_coverage=1 00:12:34.074 --rc genhtml_legend=1 00:12:34.074 --rc geninfo_all_blocks=1 00:12:34.074 --rc geninfo_unexecuted_blocks=1 00:12:34.074 00:12:34.074 ' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:34.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.074 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.075 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.075 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.075 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.075 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.075 15:05:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.650 15:05:41 
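
The NIC discovery running here (gather_supported_nvmf_pci_devs) amounts to matching PCI vendor/device IDs against the supported lists and globbing sysfs for the net interfaces behind each match; a standalone sketch of that lookup for the E810 ID 0x159b reported below, everything else assumed:

  # Walk PCI devices, keep Intel E810 (8086:159b) ports, print their netdevs.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done
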
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:40.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:40.650 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:40.650 Found net devices under 0000:af:00.0: cvl_0_0 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:40.650 Found net devices under 0000:af:00.1: cvl_0_1 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.650 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:12:40.651 00:12:40.651 --- 10.0.0.2 ping statistics --- 00:12:40.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.651 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:40.651 00:12:40.651 --- 10.0.0.1 ping statistics --- 00:12:40.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.651 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1378567 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1378567 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1378567 ']' 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:40.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.651 15:05:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 [2024-12-09 15:05:41.787759] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:12:40.651 [2024-12-09 15:05:41.787805] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.651 [2024-12-09 15:05:41.864743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.651 [2024-12-09 15:05:41.905879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.651 [2024-12-09 15:05:41.905914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.651 [2024-12-09 15:05:41.905921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.651 [2024-12-09 15:05:41.905927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.651 [2024-12-09 15:05:41.905933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.651 [2024-12-09 15:05:41.906472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 [2024-12-09 15:05:42.046382] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 [2024-12-09 15:05:42.066552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 NULL1 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.651 15:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:40.651 [2024-12-09 15:05:42.123680] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
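
Stripped of the rpc_cmd/xtrace wrapping, the target setup and test launch just recorded come down to a handful of RPCs plus one initiator-side tool; a sketch assuming rpc_cmd resolves to scripts/rpc.py on the default /var/tmp/spdk.sock, with paths shortened to repo-relative form:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks -> the 1GB namespace reported below
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # initiator side: submit fused command pairs against that namespace
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
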
00:12:40.651 [2024-12-09 15:05:42.123711] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378700 ] 00:12:40.911 Attached to nqn.2016-06.io.spdk:cnode1 00:12:40.911 Namespace ID: 1 size: 1GB 00:12:40.911 fused_ordering(0) 00:12:40.911 fused_ordering(1) 00:12:40.911 fused_ordering(2) 00:12:40.911 fused_ordering(3) 00:12:40.911 fused_ordering(4) 00:12:40.911 fused_ordering(5) 00:12:40.911 fused_ordering(6) 00:12:40.911 fused_ordering(7) 00:12:40.911 fused_ordering(8) 00:12:40.911 fused_ordering(9) 00:12:40.911 fused_ordering(10) 00:12:40.911 fused_ordering(11) 00:12:40.911 fused_ordering(12) 00:12:40.911 fused_ordering(13) 00:12:40.911 fused_ordering(14) 00:12:40.911 fused_ordering(15) 00:12:40.911 fused_ordering(16) 00:12:40.911 fused_ordering(17) 00:12:40.911 fused_ordering(18) 00:12:40.911 fused_ordering(19) 00:12:40.911 fused_ordering(20) 00:12:40.911 fused_ordering(21) 00:12:40.911 fused_ordering(22) 00:12:40.911 fused_ordering(23) 00:12:40.911 fused_ordering(24) 00:12:40.911 fused_ordering(25) 00:12:40.911 fused_ordering(26) 00:12:40.911 fused_ordering(27) 00:12:40.911 fused_ordering(28) 00:12:40.911 fused_ordering(29) 00:12:40.911 fused_ordering(30) 00:12:40.911 fused_ordering(31) 00:12:40.911 fused_ordering(32) 00:12:40.911 fused_ordering(33) 00:12:40.911 fused_ordering(34) 00:12:40.911 fused_ordering(35) 00:12:40.911 fused_ordering(36) 00:12:40.911 fused_ordering(37) 00:12:40.911 fused_ordering(38) 00:12:40.911 fused_ordering(39) 00:12:40.911 fused_ordering(40) 00:12:40.911 fused_ordering(41) 00:12:40.911 fused_ordering(42) 00:12:40.911 fused_ordering(43) 00:12:40.911 fused_ordering(44) 00:12:40.911 fused_ordering(45) 00:12:40.911 fused_ordering(46) 00:12:40.911 fused_ordering(47) 00:12:40.911 fused_ordering(48) 00:12:40.911 fused_ordering(49) 00:12:40.911 fused_ordering(50) 00:12:40.911 fused_ordering(51) 00:12:40.911 fused_ordering(52) 00:12:40.911 fused_ordering(53) 00:12:40.911 fused_ordering(54) 00:12:40.911 fused_ordering(55) 00:12:40.911 fused_ordering(56) 00:12:40.911 fused_ordering(57) 00:12:40.911 fused_ordering(58) 00:12:40.911 fused_ordering(59) 00:12:40.911 fused_ordering(60) 00:12:40.911 fused_ordering(61) 00:12:40.911 fused_ordering(62) 00:12:40.911 fused_ordering(63) 00:12:40.911 fused_ordering(64) 00:12:40.911 fused_ordering(65) 00:12:40.911 fused_ordering(66) 00:12:40.911 fused_ordering(67) 00:12:40.911 fused_ordering(68) 00:12:40.911 fused_ordering(69) 00:12:40.911 fused_ordering(70) 00:12:40.911 fused_ordering(71) 00:12:40.911 fused_ordering(72) 00:12:40.911 fused_ordering(73) 00:12:40.911 fused_ordering(74) 00:12:40.911 fused_ordering(75) 00:12:40.911 fused_ordering(76) 00:12:40.911 fused_ordering(77) 00:12:40.911 fused_ordering(78) 00:12:40.911 fused_ordering(79) 00:12:40.911 fused_ordering(80) 00:12:40.911 fused_ordering(81) 00:12:40.911 fused_ordering(82) 00:12:40.911 fused_ordering(83) 00:12:40.911 fused_ordering(84) 00:12:40.911 fused_ordering(85) 00:12:40.911 fused_ordering(86) 00:12:40.911 fused_ordering(87) 00:12:40.911 fused_ordering(88) 00:12:40.911 fused_ordering(89) 00:12:40.911 fused_ordering(90) 00:12:40.911 fused_ordering(91) 00:12:40.911 fused_ordering(92) 00:12:40.911 fused_ordering(93) 00:12:40.911 fused_ordering(94) 00:12:40.911 fused_ordering(95) 00:12:40.911 fused_ordering(96) 00:12:40.911 fused_ordering(97) 00:12:40.911 fused_ordering(98) 
00:12:40.911 fused_ordering(99) 00:12:40.911 fused_ordering(100) 00:12:40.911 fused_ordering(101) 00:12:40.911 fused_ordering(102) 00:12:40.911 fused_ordering(103) 00:12:40.911 fused_ordering(104) 00:12:40.911 fused_ordering(105) 00:12:40.911 fused_ordering(106) 00:12:40.911 fused_ordering(107) 00:12:40.911 fused_ordering(108) 00:12:40.911 fused_ordering(109) 00:12:40.911 fused_ordering(110) 00:12:40.911 fused_ordering(111) 00:12:40.911 fused_ordering(112) 00:12:40.911 fused_ordering(113) 00:12:40.911 fused_ordering(114) 00:12:40.911 fused_ordering(115) 00:12:40.911 fused_ordering(116) 00:12:40.911 fused_ordering(117) 00:12:40.911 fused_ordering(118) 00:12:40.911 fused_ordering(119) 00:12:40.911 fused_ordering(120) 00:12:40.911 fused_ordering(121) 00:12:40.911 fused_ordering(122) 00:12:40.911 fused_ordering(123) 00:12:40.911 fused_ordering(124) 00:12:40.911 fused_ordering(125) 00:12:40.911 fused_ordering(126) 00:12:40.911 fused_ordering(127) 00:12:40.911 fused_ordering(128) 00:12:40.911 fused_ordering(129) 00:12:40.911 fused_ordering(130) 00:12:40.911 fused_ordering(131) 00:12:40.911 fused_ordering(132) 00:12:40.911 fused_ordering(133) 00:12:40.911 fused_ordering(134) 00:12:40.911 fused_ordering(135) 00:12:40.911 fused_ordering(136) 00:12:40.911 fused_ordering(137) 00:12:40.911 fused_ordering(138) 00:12:40.911 fused_ordering(139) 00:12:40.911 fused_ordering(140) 00:12:40.911 fused_ordering(141) 00:12:40.911 fused_ordering(142) 00:12:40.911 fused_ordering(143) 00:12:40.911 fused_ordering(144) 00:12:40.911 fused_ordering(145) 00:12:40.911 fused_ordering(146) 00:12:40.911 fused_ordering(147) 00:12:40.911 fused_ordering(148) 00:12:40.911 fused_ordering(149) 00:12:40.911 fused_ordering(150) 00:12:40.911 fused_ordering(151) 00:12:40.911 fused_ordering(152) 00:12:40.911 fused_ordering(153) 00:12:40.911 fused_ordering(154) 00:12:40.911 fused_ordering(155) 00:12:40.911 fused_ordering(156) 00:12:40.911 fused_ordering(157) 00:12:40.911 fused_ordering(158) 00:12:40.911 fused_ordering(159) 00:12:40.911 fused_ordering(160) 00:12:40.911 fused_ordering(161) 00:12:40.911 fused_ordering(162) 00:12:40.911 fused_ordering(163) 00:12:40.911 fused_ordering(164) 00:12:40.911 fused_ordering(165) 00:12:40.911 fused_ordering(166) 00:12:40.911 fused_ordering(167) 00:12:40.911 fused_ordering(168) 00:12:40.911 fused_ordering(169) 00:12:40.911 fused_ordering(170) 00:12:40.911 fused_ordering(171) 00:12:40.911 fused_ordering(172) 00:12:40.911 fused_ordering(173) 00:12:40.911 fused_ordering(174) 00:12:40.911 fused_ordering(175) 00:12:40.911 fused_ordering(176) 00:12:40.911 fused_ordering(177) 00:12:40.911 fused_ordering(178) 00:12:40.911 fused_ordering(179) 00:12:40.911 fused_ordering(180) 00:12:40.911 fused_ordering(181) 00:12:40.911 fused_ordering(182) 00:12:40.911 fused_ordering(183) 00:12:40.911 fused_ordering(184) 00:12:40.911 fused_ordering(185) 00:12:40.911 fused_ordering(186) 00:12:40.911 fused_ordering(187) 00:12:40.911 fused_ordering(188) 00:12:40.911 fused_ordering(189) 00:12:40.911 fused_ordering(190) 00:12:40.911 fused_ordering(191) 00:12:40.911 fused_ordering(192) 00:12:40.911 fused_ordering(193) 00:12:40.911 fused_ordering(194) 00:12:40.911 fused_ordering(195) 00:12:40.911 fused_ordering(196) 00:12:40.911 fused_ordering(197) 00:12:40.911 fused_ordering(198) 00:12:40.911 fused_ordering(199) 00:12:40.911 fused_ordering(200) 00:12:40.911 fused_ordering(201) 00:12:40.911 fused_ordering(202) 00:12:40.911 fused_ordering(203) 00:12:40.911 fused_ordering(204) 00:12:40.911 fused_ordering(205) 00:12:41.171 
fused_ordering(206) 00:12:41.171 fused_ordering(207) 00:12:41.171 fused_ordering(208) 00:12:41.171 fused_ordering(209) 00:12:41.171 fused_ordering(210) 00:12:41.171 fused_ordering(211) 00:12:41.171 fused_ordering(212) 00:12:41.171 fused_ordering(213) 00:12:41.171 fused_ordering(214) 00:12:41.171 fused_ordering(215) 00:12:41.171 fused_ordering(216) 00:12:41.171 fused_ordering(217) 00:12:41.171 fused_ordering(218) 00:12:41.171 fused_ordering(219) 00:12:41.171 fused_ordering(220) 00:12:41.171 fused_ordering(221) 00:12:41.171 fused_ordering(222) 00:12:41.171 fused_ordering(223) 00:12:41.171 fused_ordering(224) 00:12:41.171 fused_ordering(225) 00:12:41.171 fused_ordering(226) 00:12:41.171 fused_ordering(227) 00:12:41.171 fused_ordering(228) 00:12:41.171 fused_ordering(229) 00:12:41.171 fused_ordering(230) 00:12:41.171 fused_ordering(231) 00:12:41.171 fused_ordering(232) 00:12:41.171 fused_ordering(233) 00:12:41.171 fused_ordering(234) 00:12:41.171 fused_ordering(235) 00:12:41.171 fused_ordering(236) 00:12:41.171 fused_ordering(237) 00:12:41.171 fused_ordering(238) 00:12:41.171 fused_ordering(239) 00:12:41.171 fused_ordering(240) 00:12:41.171 fused_ordering(241) 00:12:41.171 fused_ordering(242) 00:12:41.171 fused_ordering(243) 00:12:41.171 fused_ordering(244) 00:12:41.171 fused_ordering(245) 00:12:41.171 fused_ordering(246) 00:12:41.171 fused_ordering(247) 00:12:41.171 fused_ordering(248) 00:12:41.171 fused_ordering(249) 00:12:41.171 fused_ordering(250) 00:12:41.171 fused_ordering(251) 00:12:41.171 fused_ordering(252) 00:12:41.171 fused_ordering(253) 00:12:41.171 fused_ordering(254) 00:12:41.171 fused_ordering(255) 00:12:41.171 fused_ordering(256) 00:12:41.171 fused_ordering(257) 00:12:41.171 fused_ordering(258) 00:12:41.171 fused_ordering(259) 00:12:41.171 fused_ordering(260) 00:12:41.171 fused_ordering(261) 00:12:41.171 fused_ordering(262) 00:12:41.171 fused_ordering(263) 00:12:41.171 fused_ordering(264) 00:12:41.171 fused_ordering(265) 00:12:41.171 fused_ordering(266) 00:12:41.171 fused_ordering(267) 00:12:41.171 fused_ordering(268) 00:12:41.171 fused_ordering(269) 00:12:41.171 fused_ordering(270) 00:12:41.171 fused_ordering(271) 00:12:41.171 fused_ordering(272) 00:12:41.171 fused_ordering(273) 00:12:41.171 fused_ordering(274) 00:12:41.171 fused_ordering(275) 00:12:41.171 fused_ordering(276) 00:12:41.171 fused_ordering(277) 00:12:41.171 fused_ordering(278) 00:12:41.171 fused_ordering(279) 00:12:41.171 fused_ordering(280) 00:12:41.171 fused_ordering(281) 00:12:41.171 fused_ordering(282) 00:12:41.171 fused_ordering(283) 00:12:41.171 fused_ordering(284) 00:12:41.171 fused_ordering(285) 00:12:41.171 fused_ordering(286) 00:12:41.171 fused_ordering(287) 00:12:41.171 fused_ordering(288) 00:12:41.171 fused_ordering(289) 00:12:41.171 fused_ordering(290) 00:12:41.171 fused_ordering(291) 00:12:41.171 fused_ordering(292) 00:12:41.171 fused_ordering(293) 00:12:41.171 fused_ordering(294) 00:12:41.171 fused_ordering(295) 00:12:41.171 fused_ordering(296) 00:12:41.171 fused_ordering(297) 00:12:41.171 fused_ordering(298) 00:12:41.171 fused_ordering(299) 00:12:41.171 fused_ordering(300) 00:12:41.171 fused_ordering(301) 00:12:41.171 fused_ordering(302) 00:12:41.171 fused_ordering(303) 00:12:41.171 fused_ordering(304) 00:12:41.171 fused_ordering(305) 00:12:41.171 fused_ordering(306) 00:12:41.171 fused_ordering(307) 00:12:41.171 fused_ordering(308) 00:12:41.171 fused_ordering(309) 00:12:41.171 fused_ordering(310) 00:12:41.171 fused_ordering(311) 00:12:41.171 fused_ordering(312) 00:12:41.171 fused_ordering(313) 
00:12:41.171 fused_ordering(314) 00:12:41.171 fused_ordering(315) 00:12:41.171 fused_ordering(316) 00:12:41.171 fused_ordering(317) 00:12:41.171 fused_ordering(318) 00:12:41.171 fused_ordering(319) 00:12:41.171 fused_ordering(320) 00:12:41.171 fused_ordering(321) 00:12:41.171 fused_ordering(322) 00:12:41.171 fused_ordering(323) 00:12:41.171 fused_ordering(324) 00:12:41.171 fused_ordering(325) 00:12:41.171 fused_ordering(326) 00:12:41.171 fused_ordering(327) 00:12:41.171 fused_ordering(328) 00:12:41.171 fused_ordering(329) 00:12:41.171 fused_ordering(330) 00:12:41.171 fused_ordering(331) 00:12:41.171 fused_ordering(332) 00:12:41.171 fused_ordering(333) 00:12:41.171 fused_ordering(334) 00:12:41.171 fused_ordering(335) 00:12:41.171 fused_ordering(336) 00:12:41.171 fused_ordering(337) 00:12:41.171 fused_ordering(338) 00:12:41.171 fused_ordering(339) 00:12:41.171 fused_ordering(340) 00:12:41.171 fused_ordering(341) 00:12:41.171 fused_ordering(342) 00:12:41.171 fused_ordering(343) 00:12:41.171 fused_ordering(344) 00:12:41.171 fused_ordering(345) 00:12:41.171 fused_ordering(346) 00:12:41.171 fused_ordering(347) 00:12:41.171 fused_ordering(348) 00:12:41.171 fused_ordering(349) 00:12:41.171 fused_ordering(350) 00:12:41.171 fused_ordering(351) 00:12:41.171 fused_ordering(352) 00:12:41.171 fused_ordering(353) 00:12:41.171 fused_ordering(354) 00:12:41.171 fused_ordering(355) 00:12:41.171 fused_ordering(356) 00:12:41.171 fused_ordering(357) 00:12:41.171 fused_ordering(358) 00:12:41.171 fused_ordering(359) 00:12:41.171 fused_ordering(360) 00:12:41.171 fused_ordering(361) 00:12:41.171 fused_ordering(362) 00:12:41.171 fused_ordering(363) 00:12:41.171 fused_ordering(364) 00:12:41.171 fused_ordering(365) 00:12:41.171 fused_ordering(366) 00:12:41.171 fused_ordering(367) 00:12:41.171 fused_ordering(368) 00:12:41.171 fused_ordering(369) 00:12:41.171 fused_ordering(370) 00:12:41.171 fused_ordering(371) 00:12:41.171 fused_ordering(372) 00:12:41.171 fused_ordering(373) 00:12:41.171 fused_ordering(374) 00:12:41.171 fused_ordering(375) 00:12:41.171 fused_ordering(376) 00:12:41.171 fused_ordering(377) 00:12:41.171 fused_ordering(378) 00:12:41.171 fused_ordering(379) 00:12:41.171 fused_ordering(380) 00:12:41.171 fused_ordering(381) 00:12:41.171 fused_ordering(382) 00:12:41.171 fused_ordering(383) 00:12:41.171 fused_ordering(384) 00:12:41.171 fused_ordering(385) 00:12:41.171 fused_ordering(386) 00:12:41.171 fused_ordering(387) 00:12:41.171 fused_ordering(388) 00:12:41.171 fused_ordering(389) 00:12:41.171 fused_ordering(390) 00:12:41.171 fused_ordering(391) 00:12:41.171 fused_ordering(392) 00:12:41.171 fused_ordering(393) 00:12:41.171 fused_ordering(394) 00:12:41.171 fused_ordering(395) 00:12:41.171 fused_ordering(396) 00:12:41.171 fused_ordering(397) 00:12:41.171 fused_ordering(398) 00:12:41.171 fused_ordering(399) 00:12:41.171 fused_ordering(400) 00:12:41.171 fused_ordering(401) 00:12:41.171 fused_ordering(402) 00:12:41.171 fused_ordering(403) 00:12:41.171 fused_ordering(404) 00:12:41.171 fused_ordering(405) 00:12:41.171 fused_ordering(406) 00:12:41.171 fused_ordering(407) 00:12:41.171 fused_ordering(408) 00:12:41.171 fused_ordering(409) 00:12:41.171 fused_ordering(410) 00:12:41.431 fused_ordering(411) 00:12:41.431 fused_ordering(412) 00:12:41.431 fused_ordering(413) 00:12:41.431 fused_ordering(414) 00:12:41.431 fused_ordering(415) 00:12:41.431 fused_ordering(416) 00:12:41.431 fused_ordering(417) 00:12:41.431 fused_ordering(418) 00:12:41.431 fused_ordering(419) 00:12:41.431 fused_ordering(420) 00:12:41.431 
fused_ordering(421) ... fused_ordering(820) [per-command output for entries 421 through 820, emitted between 00:12:41.431 and 00:12:42.260, condensed] 00:12:42.260 [2024-12-09 15:05:43.888910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1943640 is same with the state(6) to be set [this error was interleaved mid-line with the fused_ordering(821) output] 00:12:42.260 fused_ordering(821) ... fused_ordering(1023) [per-command output for entries 821 through 1023, emitted between 00:12:42.260 and 00:12:42.261, condensed] 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.261 rmmod nvme_tcp 00:12:42.261 rmmod nvme_fabrics 00:12:42.261 rmmod nvme_keyring 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 --
# set -e 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1378567 ']' 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1378567 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1378567 ']' 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1378567 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.261 15:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1378567 00:12:42.261 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:42.261 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:42.261 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1378567' 00:12:42.261 killing process with pid 1378567 00:12:42.261 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1378567 00:12:42.261 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1378567 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.520 15:05:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.071 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.071 00:12:45.071 real 0m10.637s 00:12:45.071 user 0m4.986s 00:12:45.071 sys 0m5.776s 00:12:45.071 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
-- # set +x 00:12:45.072 ************************************ 00:12:45.072 END TEST nvmf_fused_ordering 00:12:45.072 ************************************ 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.072 ************************************ 00:12:45.072 START TEST nvmf_ns_masking 00:12:45.072 ************************************ 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:45.072 * Looking for test storage... 00:12:45.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:45.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.072 --rc genhtml_branch_coverage=1 00:12:45.072 --rc genhtml_function_coverage=1 00:12:45.072 --rc genhtml_legend=1 00:12:45.072 --rc geninfo_all_blocks=1 00:12:45.072 --rc geninfo_unexecuted_blocks=1 00:12:45.072 00:12:45.072 ' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:45.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.072 --rc genhtml_branch_coverage=1 00:12:45.072 --rc genhtml_function_coverage=1 00:12:45.072 --rc genhtml_legend=1 00:12:45.072 --rc geninfo_all_blocks=1 00:12:45.072 --rc geninfo_unexecuted_blocks=1 00:12:45.072 00:12:45.072 ' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:45.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.072 --rc genhtml_branch_coverage=1 00:12:45.072 --rc genhtml_function_coverage=1 00:12:45.072 --rc genhtml_legend=1 00:12:45.072 --rc geninfo_all_blocks=1 00:12:45.072 --rc geninfo_unexecuted_blocks=1 00:12:45.072 00:12:45.072 ' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:45.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.072 --rc genhtml_branch_coverage=1 00:12:45.072 --rc genhtml_function_coverage=1 00:12:45.072 --rc genhtml_legend=1 00:12:45.072 --rc geninfo_all_blocks=1 00:12:45.072 --rc geninfo_unexecuted_blocks=1 00:12:45.072 00:12:45.072 ' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.072 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a7e512fb-211c-42bb-98e1-848fe56909e2 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2d041239-b798-46c9-8119-faf681fe7b6f 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9809ec36-8a11-4ed2-9200-7a1e462dbf58 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.073 15:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.643 15:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:51.643 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:51.643 15:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.643 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:51.643 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:51.644 Found net devices under 0000:af:00.0: cvl_0_0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:51.644 Found net devices under 0000:af:00.1: cvl_0_1 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.644 15:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:12:51.644 00:12:51.644 --- 10.0.0.2 ping statistics --- 00:12:51.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.644 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:12:51.644 00:12:51.644 --- 10.0.0.1 ping statistics --- 00:12:51.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.644 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1382530 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1382530 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1382530 ']' 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.644 [2024-12-09 15:05:52.537188] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:12:51.644 [2024-12-09 15:05:52.537247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.644 [2024-12-09 15:05:52.617169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.644 [2024-12-09 15:05:52.655135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.644 [2024-12-09 15:05:52.655170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.644 [2024-12-09 15:05:52.655177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.644 [2024-12-09 15:05:52.655183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.644 [2024-12-09 15:05:52.655187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
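Condensed, the nvmftestinit trace above performs the following target-side network setup before the app starts. This is a minimal sketch reconstructed from the commands recorded in this log; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the workspace path are specific to this CI host.

  # move the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the initiator side (host) and the target side (namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, tagged so cleanup can strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify connectivity in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # launch the SPDK NVMe-oF target inside the namespace (as traced above)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
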
00:12:51.644 [2024-12-09 15:05:52.655763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:51.644 [2024-12-09 15:05:52.959795] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:51.644 15:05:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:51.644 Malloc1 00:12:51.645 15:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:51.645 Malloc2 00:12:51.645 15:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.903 15:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:52.162 15:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.162 [2024-12-09 15:05:53.956261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.421 15:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:52.421 15:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9809ec36-8a11-4ed2-9200-7a1e462dbf58 -a 10.0.0.2 -s 4420 -i 4 00:12:52.421 15:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.421 15:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:52.421 15:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.421 15:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:52.421 
15:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.956 [ 0]:0x1 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0a5e99cfc164f4b96ae1febd4288258 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0a5e99cfc164f4b96ae1febd4288258 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.956 [ 0]:0x1 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0a5e99cfc164f4b96ae1febd4288258 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0a5e99cfc164f4b96ae1febd4288258 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.956 15:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.956 [ 1]:0x2 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cad1cf2969b44956a353f0fd8ed90afd 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:54.956 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.216 15:05:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.474 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9809ec36-8a11-4ed2-9200-7a1e462dbf58 -a 10.0.0.2 -s 4420 -i 4 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:55.733 15:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
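[Editor's note] After confirming both namespaces are visible, the test disconnects, removes namespace 1 and re-adds the same bdev with masking enabled. A sketch of that step, reusing the subsystem and bdev names from this run:

    # Re-add namespace 1 as masked: no host can see it until explicitly allowed.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # After reconnecting, only namespace 2 (added without --no-auto-visible) is
    # expected to enumerate until nvmf_ns_add_host grants access to host1.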
return 0 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.266 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.266 [ 0]:0x2 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
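[Editor's note] The "NOT ns_is_visible 0x1" expansion above is the test framework asserting that a check fails: the masked namespace must report an all-zero NGUID to this host. A rough sketch of the NOT helper's semantics as they appear in this trace (the real common/autotest_common.sh implementation also tracks expected error codes):

    # NOT <cmd...>: succeed only if the command fails; crash-range exit codes are not inverted.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( es != 0 ))                    # success only when the wrapped command failed
    }

    NOT ns_is_visible 0x1   # passes only while namespace 1 is hidden from this host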
nguid=cad1cf2969b44956a353f0fd8ed90afd 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.267 [ 0]:0x1 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0a5e99cfc164f4b96ae1febd4288258 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0a5e99cfc164f4b96ae1febd4288258 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.267 [ 1]:0x2 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cad1cf2969b44956a353f0fd8ed90afd 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.267 15:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.526 15:06:00 
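[Editor's note] This is the core masking round trip: nvmf_ns_add_host grants host1 access to the masked namespace 1 (both namespaces then enumerate), and nvmf_ns_remove_host revokes it again. A condensed sketch of the sequence being traced:

    nqn=nqn.2016-06.io.spdk:cnode1
    host=nqn.2016-06.io.spdk:host1

    $rpc nvmf_ns_add_host    "$nqn" 1 "$host"   # namespace 1 becomes visible to host1
    ns_is_visible 0x1 && ns_is_visible 0x2      # both namespaces now enumerate

    $rpc nvmf_ns_remove_host "$nqn" 1 "$host"   # revoke the grant
    NOT ns_is_visible 0x1                       # namespace 1 hidden again
    ns_is_visible 0x2                           # namespace 2 unaffected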
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.526 [ 0]:0x2 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cad1cf2969b44956a353f0fd8ed90afd 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.526 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.784 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:58.785 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9809ec36-8a11-4ed2-9200-7a1e462dbf58 -a 10.0.0.2 -s 4420 -i 4 00:12:59.043 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:59.043 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.043 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.043 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:59.043 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:59.043 15:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.947 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.206 [ 0]:0x1 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0a5e99cfc164f4b96ae1febd4288258 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0a5e99cfc164f4b96ae1febd4288258 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.206 [ 1]:0x2 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cad1cf2969b44956a353f0fd8ed90afd 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != 
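[Editor's note] The lsblk/grep loop above is waitforserial: after "connect 2" the host must eventually see two block devices carrying the subsystem serial. A simplified sketch of that polling loop, inferred from the trace:

    # waitforserial <serial> [count]: poll until <count> block devices with the serial appear.
    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 found
        while (( i++ <= 15 )); do
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0
            sleep 2
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME 2   # two namespaces expected after 'connect 2'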
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.206 15:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.465 [ 0]:0x2 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cad1cf2969b44956a353f0fd8ed90afd 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.465 15:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:01.465 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:01.724 [2024-12-09 15:06:03.379024] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:01.724 request: 00:13:01.724 { 00:13:01.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.724 "nsid": 2, 00:13:01.724 "host": "nqn.2016-06.io.spdk:host1", 00:13:01.724 "method": "nvmf_ns_remove_host", 00:13:01.724 "req_id": 1 00:13:01.724 } 00:13:01.724 Got JSON-RPC error response 00:13:01.724 response: 00:13:01.724 { 00:13:01.724 "code": -32602, 00:13:01.724 "message": "Invalid parameters" 00:13:01.724 } 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:01.724 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.725 15:06:03 
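[Editor's note] The -32602 error above is the expected outcome: namespace 2 was added auto-visible, so it has no per-host allow list for nvmf_ns_remove_host to edit, and the test wraps the call in NOT. A sketch of the negative check, with the approximate JSON-RPC 2.0 exchange (the exact wire framing over the Unix socket is an assumption; only the method, parameter names and error code are taken from the log):

    NOT $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1

    # Roughly, on the wire:
    #   -> {"jsonrpc":"2.0","id":1,"method":"nvmf_ns_remove_host",
    #       "params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":2,
    #                 "host":"nqn.2016-06.io.spdk:host1"}}
    #   <- {"jsonrpc":"2.0","id":1,
    #       "error":{"code":-32602,"message":"Invalid parameters"}}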
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.725 [ 0]:0x2 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cad1cf2969b44956a353f0fd8ed90afd 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cad1cf2969b44956a353f0fd8ed90afd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:01.725 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1384627 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1384627 
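[Editor's note] At this point the test starts a second SPDK application acting as the host: a spdk_tgt bound to its own RPC socket and core, so bdev_nvme_attach_controller can be driven independently of the target process. A sketch of that step, using the paths from this workspace:

    spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    $spdk_bin -r /var/tmp/host.sock -m 2 &   # private RPC socket, core mask 0x2
    hostpid=$!
    # The test waits for the socket to come up (waitforlisten) before issuing RPCs.

    # hostrpc in the trace simply forwards to rpc.py with the host socket:
    hostrpc() { $rpc -s /var/tmp/host.sock "$@"; }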
/var/tmp/host.sock 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1384627 ']' 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:01.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.984 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:01.984 [2024-12-09 15:06:03.606198] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:13:01.984 [2024-12-09 15:06:03.606248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384627 ] 00:13:01.984 [2024-12-09 15:06:03.681605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.984 [2024-12-09 15:06:03.722720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.243 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.243 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:02.243 15:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.502 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:02.761 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a7e512fb-211c-42bb-98e1-848fe56909e2 00:13:02.761 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:02.761 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A7E512FB211C42BB98E1848FE56909E2 -i 00:13:02.761 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2d041239-b798-46c9-8119-faf681fe7b6f 00:13:02.761 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:02.761 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2D041239B79846C98119FAF681FE7B6F -i 00:13:03.019 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
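[Editor's note] Both namespaces are now re-created with fixed NGUIDs derived from UUIDs via uuid2nguid. The helper's exact body lives in nvmf/common.sh; the sketch below reproduces its observable effect in this trace (dashes stripped, hex upper-cased):

    uuid2nguid() {
        local uuid=$1
        echo "${uuid^^}" | tr -d -
    }

    uuid2nguid a7e512fb-211c-42bb-98e1-848fe56909e2
    # -> A7E512FB211C42BB98E1848FE56909E2, passed to nvmf_subsystem_add_ns via -g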
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:03.278 15:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:03.536 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:03.536 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:03.794 nvme0n1 00:13:03.795 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:03.795 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:04.053 nvme1n2 00:13:04.053 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:04.053 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:04.053 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:04.053 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:04.053 15:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:04.312 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:04.312 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:04.312 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:04.312 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:04.571 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a7e512fb-211c-42bb-98e1-848fe56909e2 == \a\7\e\5\1\2\f\b\-\2\1\1\c\-\4\2\b\b\-\9\8\e\1\-\8\4\8\f\e\5\6\9\0\9\e\2 ]] 00:13:04.571 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:04.571 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:04.571 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:04.830 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
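[Editor's note] The host-side verification attaches one NVMe-oF controller per host NQN and checks that each sees exactly its own masked namespace with the expected UUID. A condensed sketch of the commands being traced:

    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2

    hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # expect: nvme0n1 nvme1n2
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'        # expect: a7e512fb-211c-42bb-98e1-848fe56909e2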
2d041239-b798-46c9-8119-faf681fe7b6f == \2\d\0\4\1\2\3\9\-\b\7\9\8\-\4\6\c\9\-\8\1\1\9\-\f\a\f\6\8\1\f\e\7\b\6\f ]] 00:13:04.830 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.830 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a7e512fb-211c-42bb-98e1-848fe56909e2 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A7E512FB211C42BB98E1848FE56909E2 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A7E512FB211C42BB98E1848FE56909E2 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:05.089 15:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A7E512FB211C42BB98E1848FE56909E2 00:13:05.348 [2024-12-09 15:06:06.997009] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:05.348 [2024-12-09 15:06:06.997040] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:05.348 [2024-12-09 15:06:06.997048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:05.348 request: 00:13:05.348 { 00:13:05.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.348 "namespace": { 00:13:05.348 "bdev_name": 
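[Editor's note] The final negative case adds a namespace backed by a bdev name that does not exist; the target fails the bdev open (error -19, i.e. ENODEV, surfaced as -32602 in the response below), so the RPC is again wrapped in NOT:

    NOT $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid \
            -n 1 -g A7E512FB211C42BB98E1848FE56909E2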
"invalid", 00:13:05.348 "nsid": 1, 00:13:05.348 "nguid": "A7E512FB211C42BB98E1848FE56909E2", 00:13:05.348 "no_auto_visible": false, 00:13:05.348 "hide_metadata": false 00:13:05.348 }, 00:13:05.348 "method": "nvmf_subsystem_add_ns", 00:13:05.348 "req_id": 1 00:13:05.348 } 00:13:05.348 Got JSON-RPC error response 00:13:05.348 response: 00:13:05.348 { 00:13:05.348 "code": -32602, 00:13:05.348 "message": "Invalid parameters" 00:13:05.348 } 00:13:05.348 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:05.349 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.349 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.349 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.349 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a7e512fb-211c-42bb-98e1-848fe56909e2 00:13:05.349 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:05.349 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A7E512FB211C42BB98E1848FE56909E2 -i 00:13:05.606 15:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:07.508 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:07.508 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:07.508 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1384627 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1384627 ']' 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1384627 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1384627 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1384627' 00:13:07.767 killing process with pid 1384627 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1384627 00:13:07.767 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1384627 00:13:08.026 15:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.284 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.284 rmmod nvme_tcp 00:13:08.284 rmmod nvme_fabrics 00:13:08.284 rmmod nvme_keyring 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1382530 ']' 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1382530 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1382530 ']' 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1382530 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1382530 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1382530' 00:13:08.543 killing process with pid 1382530 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1382530 00:13:08.543 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1382530 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
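[Editor's note] nvmftestfini then unwinds the host and target state: the kernel NVMe/TCP initiator modules are unloaded (the rmmod lines above), the target process is killed, the SPDK_NVMF iptables rules are dropped and the secondary test interface is flushed. A sketch of that teardown, with interface names taken from this CI host:

    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # unload initiator stack
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK_NVMF rules
    # remove_spdk_ns (not expanded in this trace) deletes any SPDK-created network
    # namespaces, after which the 10.0.0.x addresses are flushed:
    ip -4 addr flush cvl_0_1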
00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.802 15:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.708 00:13:10.708 real 0m26.118s 00:13:10.708 user 0m31.201s 00:13:10.708 sys 0m7.046s 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.708 ************************************ 00:13:10.708 END TEST nvmf_ns_masking 00:13:10.708 ************************************ 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.708 15:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:10.967 ************************************ 00:13:10.967 START TEST nvmf_nvme_cli 00:13:10.967 ************************************ 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:10.967 * Looking for test storage... 
00:13:10.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:10.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.967 --rc genhtml_branch_coverage=1 00:13:10.967 --rc genhtml_function_coverage=1 00:13:10.967 --rc genhtml_legend=1 00:13:10.967 --rc geninfo_all_blocks=1 00:13:10.967 --rc geninfo_unexecuted_blocks=1 00:13:10.967 00:13:10.967 ' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:10.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.967 --rc genhtml_branch_coverage=1 00:13:10.967 --rc genhtml_function_coverage=1 00:13:10.967 --rc genhtml_legend=1 00:13:10.967 --rc geninfo_all_blocks=1 00:13:10.967 --rc geninfo_unexecuted_blocks=1 00:13:10.967 00:13:10.967 ' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:10.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.967 --rc genhtml_branch_coverage=1 00:13:10.967 --rc genhtml_function_coverage=1 00:13:10.967 --rc genhtml_legend=1 00:13:10.967 --rc geninfo_all_blocks=1 00:13:10.967 --rc geninfo_unexecuted_blocks=1 00:13:10.967 00:13:10.967 ' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:10.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.967 --rc genhtml_branch_coverage=1 00:13:10.967 --rc genhtml_function_coverage=1 00:13:10.967 --rc genhtml_legend=1 00:13:10.967 --rc geninfo_all_blocks=1 00:13:10.967 --rc geninfo_unexecuted_blocks=1 00:13:10.967 00:13:10.967 ' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
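[Editor's note] The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x, which selects the branch/function coverage options exported next. A simplified equivalent of the "lt" helper being traced (the real one splits on ".", "-" and ":" and pads field counts, as the sketch does; its full edge-case handling is not reproduced):

    lt() {  # "lt A B" succeeds when version A < version B
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            (( 10#$a < 10#$b )) && return 0
            (( 10#$a > 10#$b )) && return 1
        done
        return 1   # equal is not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2.x: use --rc lcov_* coverage options"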
00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.967 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.968 15:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:10.968 15:06:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:17.539 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:17.539 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.539 
15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.539 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:17.540 Found net devices under 0000:af:00.0: cvl_0_0 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:17.540 Found net devices under 0000:af:00.1: cvl_0_1 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:13:17.540 00:13:17.540 --- 10.0.0.2 ping statistics --- 00:13:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.540 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:13:17.540 00:13:17.540 --- 10.0.0.1 ping statistics --- 00:13:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.540 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1389610 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1389610 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1389610 ']' 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 [2024-12-09 15:06:18.726831] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:13:17.540 [2024-12-09 15:06:18.726876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.540 [2024-12-09 15:06:18.802230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.540 [2024-12-09 15:06:18.842528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.540 [2024-12-09 15:06:18.842565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.540 [2024-12-09 15:06:18.842572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.540 [2024-12-09 15:06:18.842577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.540 [2024-12-09 15:06:18.842582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.540 [2024-12-09 15:06:18.844032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.540 [2024-12-09 15:06:18.846721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.540 [2024-12-09 15:06:18.846752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.540 [2024-12-09 15:06:18.846753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.540 15:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 [2024-12-09 15:06:18.995241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 Malloc0 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.540 Malloc1 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.540 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.541 [2024-12-09 15:06:19.094002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:17.541 00:13:17.541 Discovery Log Number of Records 2, Generation counter 2 00:13:17.541 =====Discovery Log Entry 0====== 00:13:17.541 trtype: tcp 00:13:17.541 adrfam: ipv4 00:13:17.541 subtype: current discovery subsystem 00:13:17.541 treq: not required 00:13:17.541 portid: 0 00:13:17.541 trsvcid: 4420 00:13:17.541 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:17.541 traddr: 10.0.0.2 00:13:17.541 eflags: explicit discovery connections, duplicate discovery information 00:13:17.541 sectype: none 00:13:17.541 =====Discovery Log Entry 1====== 00:13:17.541 trtype: tcp 00:13:17.541 adrfam: ipv4 00:13:17.541 subtype: nvme subsystem 00:13:17.541 treq: not required 00:13:17.541 portid: 0 00:13:17.541 trsvcid: 4420 00:13:17.541 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:17.541 traddr: 10.0.0.2 00:13:17.541 eflags: none 00:13:17.541 sectype: none 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:13:17.541 15:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.916 15:06:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:18.916 15:06:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:18.916 15:06:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.916 15:06:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:18.916 15:06:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:18.916 15:06:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.815 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:20.815 /dev/nvme0n2 00:13:20.815 /dev/nvme1n1 00:13:20.815 /dev/nvme1n2 ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == 
/dev/nvme* ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:13:20.816 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.075 15:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.075 rmmod nvme_tcp 00:13:21.075 rmmod nvme_fabrics 00:13:21.075 rmmod nvme_keyring 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1389610 ']' 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1389610 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1389610 ']' 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1389610 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1389610 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1389610' 00:13:21.075 killing process with pid 1389610 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1389610 00:13:21.075 15:06:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1389610 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.334 15:06:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.871 00:13:23.871 real 0m12.577s 00:13:23.871 user 0m18.328s 00:13:23.871 sys 0m5.039s 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.871 ************************************ 00:13:23.871 END TEST nvmf_nvme_cli 00:13:23.871 ************************************ 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.871 ************************************ 00:13:23.871 START TEST nvmf_vfio_user 00:13:23.871 ************************************ 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:23.871 * Looking for test storage... 
00:13:23.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:23.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.871 --rc genhtml_branch_coverage=1 00:13:23.871 --rc genhtml_function_coverage=1 00:13:23.871 --rc genhtml_legend=1 00:13:23.871 --rc geninfo_all_blocks=1 00:13:23.871 --rc geninfo_unexecuted_blocks=1 00:13:23.871 00:13:23.871 ' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:23.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.871 --rc genhtml_branch_coverage=1 00:13:23.871 --rc genhtml_function_coverage=1 00:13:23.871 --rc genhtml_legend=1 00:13:23.871 --rc geninfo_all_blocks=1 00:13:23.871 --rc geninfo_unexecuted_blocks=1 00:13:23.871 00:13:23.871 ' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:23.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.871 --rc genhtml_branch_coverage=1 00:13:23.871 --rc genhtml_function_coverage=1 00:13:23.871 --rc genhtml_legend=1 00:13:23.871 --rc geninfo_all_blocks=1 00:13:23.871 --rc geninfo_unexecuted_blocks=1 00:13:23.871 00:13:23.871 ' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:23.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.871 --rc genhtml_branch_coverage=1 00:13:23.871 --rc genhtml_function_coverage=1 00:13:23.871 --rc genhtml_legend=1 00:13:23.871 --rc geninfo_all_blocks=1 00:13:23.871 --rc geninfo_unexecuted_blocks=1 00:13:23.871 00:13:23.871 ' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.871 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1390744 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1390744' 00:13:23.872 Process pid: 1390744 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1390744 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1390744 ']' 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:23.872 [2024-12-09 15:06:25.414988] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:13:23.872 [2024-12-09 15:06:25.415035] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.872 [2024-12-09 15:06:25.488068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.872 [2024-12-09 15:06:25.526992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.872 [2024-12-09 15:06:25.527027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:23.872 [2024-12-09 15:06:25.527039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.872 [2024-12-09 15:06:25.527044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.872 [2024-12-09 15:06:25.527049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.872 [2024-12-09 15:06:25.528447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.872 [2024-12-09 15:06:25.528561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.872 [2024-12-09 15:06:25.528665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.872 [2024-12-09 15:06:25.528667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:23.872 15:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:25.249 15:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:25.249 15:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:25.249 15:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:25.249 15:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.249 15:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:25.249 15:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:25.508 Malloc1 00:13:25.508 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:25.508 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:25.767 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:26.025 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.025 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:26.026 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:26.284 Malloc2 00:13:26.284 15:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:13:26.544 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:26.544 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:26.803 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:26.803 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:26.803 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.803 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:26.803 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:26.803 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:26.803 [2024-12-09 15:06:28.522599] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:13:26.803 [2024-12-09 15:06:28.522632] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1391370 ] 00:13:26.803 [2024-12-09 15:06:28.563685] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:26.803 [2024-12-09 15:06:28.566030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:26.803 [2024-12-09 15:06:28.566052] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc33d587000 00:13:26.803 [2024-12-09 15:06:28.567035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.568033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.569045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.570045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.571049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.572060] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.573064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.574064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:26.803 [2024-12-09 15:06:28.575068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:26.803 [2024-12-09 15:06:28.575077] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc33d57c000 00:13:26.803 [2024-12-09 15:06:28.575991] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:26.803 [2024-12-09 15:06:28.588484] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:26.803 [2024-12-09 15:06:28.588511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:26.803 [2024-12-09 15:06:28.594179] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:26.803 [2024-12-09 15:06:28.594212] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:26.803 [2024-12-09 15:06:28.594282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:26.803 [2024-12-09 15:06:28.594296] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:26.803 [2024-12-09 15:06:28.594305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:26.803 [2024-12-09 15:06:28.595178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:26.803 [2024-12-09 15:06:28.595188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:26.803 [2024-12-09 15:06:28.595195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:26.803 [2024-12-09 15:06:28.596185] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:26.803 [2024-12-09 15:06:28.596193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:26.803 [2024-12-09 15:06:28.596199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:26.803 [2024-12-09 15:06:28.597188] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:26.803 [2024-12-09 15:06:28.597197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:27.064 [2024-12-09 15:06:28.598192] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:13:27.064 [2024-12-09 15:06:28.598200] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:27.064 [2024-12-09 15:06:28.598205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:27.064 [2024-12-09 15:06:28.598211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:27.064 [2024-12-09 15:06:28.598322] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:27.064 [2024-12-09 15:06:28.598326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:27.064 [2024-12-09 15:06:28.598331] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:27.064 [2024-12-09 15:06:28.599198] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:27.064 [2024-12-09 15:06:28.600203] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:27.064 [2024-12-09 15:06:28.601210] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:27.064 [2024-12-09 15:06:28.602211] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.064 [2024-12-09 15:06:28.602276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:27.064 [2024-12-09 15:06:28.603224] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:27.064 [2024-12-09 15:06:28.603232] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:27.064 [2024-12-09 15:06:28.603236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:27.064 [2024-12-09 15:06:28.603265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603282] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.064 [2024-12-09 15:06:28.603286] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.064 [2024-12-09 15:06:28.603290] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.064 [2024-12-09 15:06:28.603302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:27.064 [2024-12-09 15:06:28.603339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:27.064 [2024-12-09 15:06:28.603348] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:27.064 [2024-12-09 15:06:28.603353] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:27.064 [2024-12-09 15:06:28.603356] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:27.064 [2024-12-09 15:06:28.603361] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:27.064 [2024-12-09 15:06:28.603365] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:27.064 [2024-12-09 15:06:28.603369] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:27.064 [2024-12-09 15:06:28.603373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:27.064 [2024-12-09 15:06:28.603399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:27.064 [2024-12-09 15:06:28.603409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.064 [2024-12-09 15:06:28.603417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.064 [2024-12-09 15:06:28.603424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.064 [2024-12-09 15:06:28.603432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.064 [2024-12-09 15:06:28.603436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:27.064 [2024-12-09 15:06:28.603461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:27.064 [2024-12-09 15:06:28.603466] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:27.064 
[2024-12-09 15:06:28.603472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.064 [2024-12-09 15:06:28.603500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:27.064 [2024-12-09 15:06:28.603549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603563] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:27.064 [2024-12-09 15:06:28.603567] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:27.064 [2024-12-09 15:06:28.603570] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.064 [2024-12-09 15:06:28.603576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:27.064 [2024-12-09 15:06:28.603587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:27.064 [2024-12-09 15:06:28.603595] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:27.064 [2024-12-09 15:06:28.603603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:27.064 [2024-12-09 15:06:28.603610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603616] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.065 [2024-12-09 15:06:28.603619] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.065 [2024-12-09 15:06:28.603622] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.065 [2024-12-09 15:06:28.603628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603675] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:27.065 [2024-12-09 15:06:28.603679] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.065 [2024-12-09 15:06:28.603682] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.065 [2024-12-09 15:06:28.603687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603737] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:27.065 [2024-12-09 15:06:28.603741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:27.065 [2024-12-09 15:06:28.603746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:27.065 [2024-12-09 15:06:28.603762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:27.065 [2024-12-09 15:06:28.603852] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:27.065 [2024-12-09 15:06:28.603856] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:27.065 [2024-12-09 15:06:28.603861] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:27.065 [2024-12-09 15:06:28.603866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:27.065 [2024-12-09 15:06:28.603873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:27.065 [2024-12-09 15:06:28.603880] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:27.065 [2024-12-09 15:06:28.603886] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:27.065 [2024-12-09 15:06:28.603893] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.065 [2024-12-09 15:06:28.603901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603908] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:27.065 [2024-12-09 15:06:28.603915] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:27.065 [2024-12-09 15:06:28.603919] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.065 [2024-12-09 15:06:28.603927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603935] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:27.065 [2024-12-09 15:06:28.603939] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:27.065 [2024-12-09 15:06:28.603943] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:27.065 [2024-12-09 15:06:28.603949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:27.065 [2024-12-09 15:06:28.603955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:27.065 [2024-12-09 15:06:28.603982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:27.065 ===================================================== 00:13:27.065 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.065 ===================================================== 00:13:27.065 Controller Capabilities/Features 00:13:27.065 ================================ 00:13:27.065 Vendor ID: 4e58 00:13:27.065 Subsystem Vendor ID: 4e58 00:13:27.065 Serial Number: SPDK1 00:13:27.065 Model Number: SPDK bdev Controller 00:13:27.065 Firmware Version: 25.01 00:13:27.065 Recommended Arb Burst: 6 00:13:27.065 IEEE OUI Identifier: 8d 6b 50 00:13:27.065 Multi-path I/O 00:13:27.065 May have multiple subsystem ports: Yes 00:13:27.065 May have multiple controllers: Yes 00:13:27.065 Associated with SR-IOV VF: No 00:13:27.065 Max Data Transfer Size: 131072 00:13:27.065 Max Number of Namespaces: 32 00:13:27.065 Max Number of I/O Queues: 127 00:13:27.065 NVMe Specification Version (VS): 1.3 00:13:27.065 NVMe Specification Version (Identify): 1.3 00:13:27.065 Maximum Queue Entries: 256 00:13:27.065 Contiguous Queues Required: Yes 00:13:27.065 Arbitration Mechanisms Supported 00:13:27.065 Weighted Round Robin: Not Supported 00:13:27.065 Vendor Specific: Not Supported 00:13:27.065 Reset Timeout: 15000 ms 00:13:27.065 Doorbell Stride: 4 bytes 00:13:27.065 NVM Subsystem Reset: Not Supported 00:13:27.065 Command Sets Supported 00:13:27.065 NVM Command Set: Supported 00:13:27.065 Boot Partition: Not Supported 00:13:27.065 Memory Page Size Minimum: 4096 bytes 00:13:27.065 Memory Page Size Maximum: 4096 bytes 00:13:27.065 Persistent Memory Region: Not Supported 00:13:27.065 Optional Asynchronous Events Supported 00:13:27.065 Namespace Attribute Notices: Supported 00:13:27.065 Firmware Activation Notices: Not Supported 00:13:27.065 ANA Change Notices: Not Supported 00:13:27.065 PLE Aggregate Log Change Notices: Not Supported 00:13:27.065 LBA Status Info Alert Notices: Not Supported 00:13:27.065 EGE Aggregate Log Change Notices: Not Supported 00:13:27.065 Normal NVM Subsystem Shutdown event: Not Supported 00:13:27.065 Zone Descriptor Change Notices: Not Supported 00:13:27.065 Discovery Log Change Notices: Not Supported 00:13:27.065 Controller Attributes 00:13:27.065 128-bit Host Identifier: Supported 00:13:27.065 Non-Operational Permissive Mode: Not Supported 00:13:27.065 NVM Sets: Not Supported 00:13:27.065 Read Recovery Levels: Not Supported 00:13:27.065 Endurance Groups: Not Supported 00:13:27.065 Predictable Latency Mode: Not Supported 00:13:27.065 Traffic Based Keep ALive: Not Supported 00:13:27.065 Namespace Granularity: Not Supported 00:13:27.065 SQ Associations: Not Supported 00:13:27.065 UUID List: Not Supported 00:13:27.065 Multi-Domain Subsystem: Not Supported 00:13:27.065 Fixed Capacity Management: Not Supported 00:13:27.065 Variable Capacity Management: Not Supported 00:13:27.065 Delete Endurance Group: Not Supported 00:13:27.065 Delete NVM Set: Not Supported 00:13:27.065 Extended LBA Formats Supported: Not Supported 00:13:27.065 Flexible Data Placement Supported: Not Supported 00:13:27.065 00:13:27.065 Controller Memory Buffer Support 00:13:27.065 ================================ 00:13:27.065 
Supported: No 00:13:27.065 00:13:27.065 Persistent Memory Region Support 00:13:27.065 ================================ 00:13:27.065 Supported: No 00:13:27.065 00:13:27.065 Admin Command Set Attributes 00:13:27.065 ============================ 00:13:27.065 Security Send/Receive: Not Supported 00:13:27.065 Format NVM: Not Supported 00:13:27.065 Firmware Activate/Download: Not Supported 00:13:27.065 Namespace Management: Not Supported 00:13:27.065 Device Self-Test: Not Supported 00:13:27.066 Directives: Not Supported 00:13:27.066 NVMe-MI: Not Supported 00:13:27.066 Virtualization Management: Not Supported 00:13:27.066 Doorbell Buffer Config: Not Supported 00:13:27.066 Get LBA Status Capability: Not Supported 00:13:27.066 Command & Feature Lockdown Capability: Not Supported 00:13:27.066 Abort Command Limit: 4 00:13:27.066 Async Event Request Limit: 4 00:13:27.066 Number of Firmware Slots: N/A 00:13:27.066 Firmware Slot 1 Read-Only: N/A 00:13:27.066 Firmware Activation Without Reset: N/A 00:13:27.066 Multiple Update Detection Support: N/A 00:13:27.066 Firmware Update Granularity: No Information Provided 00:13:27.066 Per-Namespace SMART Log: No 00:13:27.066 Asymmetric Namespace Access Log Page: Not Supported 00:13:27.066 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:27.066 Command Effects Log Page: Supported 00:13:27.066 Get Log Page Extended Data: Supported 00:13:27.066 Telemetry Log Pages: Not Supported 00:13:27.066 Persistent Event Log Pages: Not Supported 00:13:27.066 Supported Log Pages Log Page: May Support 00:13:27.066 Commands Supported & Effects Log Page: Not Supported 00:13:27.066 Feature Identifiers & Effects Log Page:May Support 00:13:27.066 NVMe-MI Commands & Effects Log Page: May Support 00:13:27.066 Data Area 4 for Telemetry Log: Not Supported 00:13:27.066 Error Log Page Entries Supported: 128 00:13:27.066 Keep Alive: Supported 00:13:27.066 Keep Alive Granularity: 10000 ms 00:13:27.066 00:13:27.066 NVM Command Set Attributes 00:13:27.066 ========================== 00:13:27.066 Submission Queue Entry Size 00:13:27.066 Max: 64 00:13:27.066 Min: 64 00:13:27.066 Completion Queue Entry Size 00:13:27.066 Max: 16 00:13:27.066 Min: 16 00:13:27.066 Number of Namespaces: 32 00:13:27.066 Compare Command: Supported 00:13:27.066 Write Uncorrectable Command: Not Supported 00:13:27.066 Dataset Management Command: Supported 00:13:27.066 Write Zeroes Command: Supported 00:13:27.066 Set Features Save Field: Not Supported 00:13:27.066 Reservations: Not Supported 00:13:27.066 Timestamp: Not Supported 00:13:27.066 Copy: Supported 00:13:27.066 Volatile Write Cache: Present 00:13:27.066 Atomic Write Unit (Normal): 1 00:13:27.066 Atomic Write Unit (PFail): 1 00:13:27.066 Atomic Compare & Write Unit: 1 00:13:27.066 Fused Compare & Write: Supported 00:13:27.066 Scatter-Gather List 00:13:27.066 SGL Command Set: Supported (Dword aligned) 00:13:27.066 SGL Keyed: Not Supported 00:13:27.066 SGL Bit Bucket Descriptor: Not Supported 00:13:27.066 SGL Metadata Pointer: Not Supported 00:13:27.066 Oversized SGL: Not Supported 00:13:27.066 SGL Metadata Address: Not Supported 00:13:27.066 SGL Offset: Not Supported 00:13:27.066 Transport SGL Data Block: Not Supported 00:13:27.066 Replay Protected Memory Block: Not Supported 00:13:27.066 00:13:27.066 Firmware Slot Information 00:13:27.066 ========================= 00:13:27.066 Active slot: 1 00:13:27.066 Slot 1 Firmware Revision: 25.01 00:13:27.066 00:13:27.066 00:13:27.066 Commands Supported and Effects 00:13:27.066 ============================== 00:13:27.066 Admin 
Commands 00:13:27.066 -------------- 00:13:27.066 Get Log Page (02h): Supported 00:13:27.066 Identify (06h): Supported 00:13:27.066 Abort (08h): Supported 00:13:27.066 Set Features (09h): Supported 00:13:27.066 Get Features (0Ah): Supported 00:13:27.066 Asynchronous Event Request (0Ch): Supported 00:13:27.066 Keep Alive (18h): Supported 00:13:27.066 I/O Commands 00:13:27.066 ------------ 00:13:27.066 Flush (00h): Supported LBA-Change 00:13:27.066 Write (01h): Supported LBA-Change 00:13:27.066 Read (02h): Supported 00:13:27.066 Compare (05h): Supported 00:13:27.066 Write Zeroes (08h): Supported LBA-Change 00:13:27.066 Dataset Management (09h): Supported LBA-Change 00:13:27.066 Copy (19h): Supported LBA-Change 00:13:27.066 00:13:27.066 Error Log 00:13:27.066 ========= 00:13:27.066 00:13:27.066 Arbitration 00:13:27.066 =========== 00:13:27.066 Arbitration Burst: 1 00:13:27.066 00:13:27.066 Power Management 00:13:27.066 ================ 00:13:27.066 Number of Power States: 1 00:13:27.066 Current Power State: Power State #0 00:13:27.066 Power State #0: 00:13:27.066 Max Power: 0.00 W 00:13:27.066 Non-Operational State: Operational 00:13:27.066 Entry Latency: Not Reported 00:13:27.066 Exit Latency: Not Reported 00:13:27.066 Relative Read Throughput: 0 00:13:27.066 Relative Read Latency: 0 00:13:27.066 Relative Write Throughput: 0 00:13:27.066 Relative Write Latency: 0 00:13:27.066 Idle Power: Not Reported 00:13:27.066 Active Power: Not Reported 00:13:27.066 Non-Operational Permissive Mode: Not Supported 00:13:27.066 00:13:27.066 Health Information 00:13:27.066 ================== 00:13:27.066 Critical Warnings: 00:13:27.066 Available Spare Space: OK 00:13:27.066 Temperature: OK 00:13:27.066 Device Reliability: OK 00:13:27.066 Read Only: No 00:13:27.066 Volatile Memory Backup: OK 00:13:27.066 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:27.066 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:27.066 Available Spare: 0% 00:13:27.066 Available Sp[2024-12-09 15:06:28.604061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:27.066 [2024-12-09 15:06:28.604070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:27.066 [2024-12-09 15:06:28.604094] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:27.066 [2024-12-09 15:06:28.604103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.066 [2024-12-09 15:06:28.604108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.066 [2024-12-09 15:06:28.604113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.066 [2024-12-09 15:06:28.604119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.066 [2024-12-09 15:06:28.604229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:27.066 [2024-12-09 15:06:28.604239] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:27.066 [2024-12-09 15:06:28.605234] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.066 [2024-12-09 15:06:28.605281] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:27.066 [2024-12-09 15:06:28.605288] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:27.066 [2024-12-09 15:06:28.606238] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:27.066 [2024-12-09 15:06:28.606252] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:27.066 [2024-12-09 15:06:28.606301] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:27.066 [2024-12-09 15:06:28.609223] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:27.066 are Threshold: 0% 00:13:27.066 Life Percentage Used: 0% 00:13:27.066 Data Units Read: 0 00:13:27.066 Data Units Written: 0 00:13:27.066 Host Read Commands: 0 00:13:27.066 Host Write Commands: 0 00:13:27.066 Controller Busy Time: 0 minutes 00:13:27.066 Power Cycles: 0 00:13:27.066 Power On Hours: 0 hours 00:13:27.066 Unsafe Shutdowns: 0 00:13:27.066 Unrecoverable Media Errors: 0 00:13:27.066 Lifetime Error Log Entries: 0 00:13:27.066 Warning Temperature Time: 0 minutes 00:13:27.066 Critical Temperature Time: 0 minutes 00:13:27.066 00:13:27.066 Number of Queues 00:13:27.066 ================ 00:13:27.066 Number of I/O Submission Queues: 127 00:13:27.066 Number of I/O Completion Queues: 127 00:13:27.066 00:13:27.066 Active Namespaces 00:13:27.066 ================= 00:13:27.066 Namespace ID:1 00:13:27.066 Error Recovery Timeout: Unlimited 00:13:27.066 Command Set Identifier: NVM (00h) 00:13:27.066 Deallocate: Supported 00:13:27.066 Deallocated/Unwritten Error: Not Supported 00:13:27.066 Deallocated Read Value: Unknown 00:13:27.066 Deallocate in Write Zeroes: Not Supported 00:13:27.066 Deallocated Guard Field: 0xFFFF 00:13:27.066 Flush: Supported 00:13:27.066 Reservation: Supported 00:13:27.066 Namespace Sharing Capabilities: Multiple Controllers 00:13:27.066 Size (in LBAs): 131072 (0GiB) 00:13:27.066 Capacity (in LBAs): 131072 (0GiB) 00:13:27.067 Utilization (in LBAs): 131072 (0GiB) 00:13:27.067 NGUID: F8251C6C346046A889C6D561F97EDCEC 00:13:27.067 UUID: f8251c6c-3460-46a8-89c6-d561f97edcec 00:13:27.067 Thin Provisioning: Not Supported 00:13:27.067 Per-NS Atomic Units: Yes 00:13:27.067 Atomic Boundary Size (Normal): 0 00:13:27.067 Atomic Boundary Size (PFail): 0 00:13:27.067 Atomic Boundary Offset: 0 00:13:27.067 Maximum Single Source Range Length: 65535 00:13:27.067 Maximum Copy Length: 65535 00:13:27.067 Maximum Source Range Count: 1 00:13:27.067 NGUID/EUI64 Never Reused: No 00:13:27.067 Namespace Write Protected: No 00:13:27.067 Number of LBA Formats: 1 00:13:27.067 Current LBA Format: LBA Format #00 00:13:27.067 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:27.067 00:13:27.067 15:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:13:27.067 [2024-12-09 15:06:28.839261] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.341 Initializing NVMe Controllers 00:13:32.341 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:32.341 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:32.341 Initialization complete. Launching workers. 00:13:32.341 ======================================================== 00:13:32.341 Latency(us) 00:13:32.341 Device Information : IOPS MiB/s Average min max 00:13:32.341 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39953.98 156.07 3204.32 971.66 10362.37 00:13:32.341 ======================================================== 00:13:32.341 Total : 39953.98 156.07 3204.32 971.66 10362.37 00:13:32.341 00:13:32.341 [2024-12-09 15:06:33.864456] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:32.341 15:06:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:32.341 [2024-12-09 15:06:34.095518] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:37.617 Initializing NVMe Controllers 00:13:37.617 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:37.617 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:37.617 Initialization complete. Launching workers. 
00:13:37.617 ======================================================== 00:13:37.617 Latency(us) 00:13:37.617 Device Information : IOPS MiB/s Average min max 00:13:37.617 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16040.36 62.66 7979.20 5994.52 15484.03 00:13:37.617 ======================================================== 00:13:37.617 Total : 16040.36 62.66 7979.20 5994.52 15484.03 00:13:37.617 00:13:37.617 [2024-12-09 15:06:39.129791] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:37.617 15:06:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:37.617 [2024-12-09 15:06:39.330754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.894 [2024-12-09 15:06:44.393520] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.894 Initializing NVMe Controllers 00:13:42.894 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.894 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:42.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:42.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:42.895 Initialization complete. Launching workers. 00:13:42.895 Starting thread on core 2 00:13:42.895 Starting thread on core 3 00:13:42.895 Starting thread on core 1 00:13:42.895 15:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:42.895 [2024-12-09 15:06:44.682584] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.188 [2024-12-09 15:06:47.747139] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:46.188 Initializing NVMe Controllers 00:13:46.188 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.188 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:46.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:46.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:46.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:46.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:46.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:46.188 Initialization complete. Launching workers. 
00:13:46.188 Starting thread on core 1 with urgent priority queue 00:13:46.188 Starting thread on core 2 with urgent priority queue 00:13:46.188 Starting thread on core 3 with urgent priority queue 00:13:46.188 Starting thread on core 0 with urgent priority queue 00:13:46.188 SPDK bdev Controller (SPDK1 ) core 0: 9737.67 IO/s 10.27 secs/100000 ios 00:13:46.188 SPDK bdev Controller (SPDK1 ) core 1: 8504.00 IO/s 11.76 secs/100000 ios 00:13:46.188 SPDK bdev Controller (SPDK1 ) core 2: 8226.00 IO/s 12.16 secs/100000 ios 00:13:46.188 SPDK bdev Controller (SPDK1 ) core 3: 7418.33 IO/s 13.48 secs/100000 ios 00:13:46.188 ======================================================== 00:13:46.188 00:13:46.188 15:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:46.447 [2024-12-09 15:06:48.029987] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:46.447 Initializing NVMe Controllers 00:13:46.447 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.447 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:46.447 Namespace ID: 1 size: 0GB 00:13:46.447 Initialization complete. 00:13:46.447 INFO: using host memory buffer for IO 00:13:46.447 Hello world! 00:13:46.447 [2024-12-09 15:06:48.064225] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:46.447 15:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:46.706 [2024-12-09 15:06:48.347592] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:47.643 Initializing NVMe Controllers 00:13:47.643 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.643 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.643 Initialization complete. Launching workers. 
00:13:47.643 submit (in ns) avg, min, max = 6738.5, 3121.9, 3999641.9 00:13:47.643 complete (in ns) avg, min, max = 20309.2, 1717.1, 4007154.3 00:13:47.643 00:13:47.643 Submit histogram 00:13:47.643 ================ 00:13:47.643 Range in us Cumulative Count 00:13:47.643 3.109 - 3.124: 0.0060% ( 1) 00:13:47.643 3.124 - 3.139: 0.0120% ( 1) 00:13:47.643 3.139 - 3.154: 0.0421% ( 5) 00:13:47.643 3.154 - 3.170: 0.1023% ( 10) 00:13:47.643 3.170 - 3.185: 0.1986% ( 16) 00:13:47.643 3.185 - 3.200: 0.5778% ( 63) 00:13:47.643 3.200 - 3.215: 2.4257% ( 307) 00:13:47.643 3.215 - 3.230: 6.3440% ( 651) 00:13:47.643 3.230 - 3.246: 11.4241% ( 844) 00:13:47.643 3.246 - 3.261: 16.7810% ( 890) 00:13:47.643 3.261 - 3.276: 22.8362% ( 1006) 00:13:47.643 3.276 - 3.291: 29.0418% ( 1031) 00:13:47.643 3.291 - 3.307: 35.0427% ( 997) 00:13:47.643 3.307 - 3.322: 40.8992% ( 973) 00:13:47.643 3.322 - 3.337: 46.3826% ( 911) 00:13:47.643 3.337 - 3.352: 52.2331% ( 972) 00:13:47.643 3.352 - 3.368: 58.4507% ( 1033) 00:13:47.643 3.368 - 3.383: 65.8661% ( 1232) 00:13:47.643 3.383 - 3.398: 71.1569% ( 879) 00:13:47.643 3.398 - 3.413: 76.7786% ( 934) 00:13:47.643 3.413 - 3.429: 80.7452% ( 659) 00:13:47.643 3.429 - 3.444: 83.7306% ( 496) 00:13:47.643 3.444 - 3.459: 85.4280% ( 282) 00:13:47.643 3.459 - 3.474: 86.4813% ( 175) 00:13:47.643 3.474 - 3.490: 87.4202% ( 156) 00:13:47.643 3.490 - 3.505: 87.9981% ( 96) 00:13:47.643 3.505 - 3.520: 88.6481% ( 108) 00:13:47.643 3.520 - 3.535: 89.4547% ( 134) 00:13:47.643 3.535 - 3.550: 90.1770% ( 120) 00:13:47.643 3.550 - 3.566: 91.1340% ( 159) 00:13:47.643 3.566 - 3.581: 92.0910% ( 159) 00:13:47.643 3.581 - 3.596: 93.0661% ( 162) 00:13:47.643 3.596 - 3.611: 93.8606% ( 132) 00:13:47.643 3.611 - 3.627: 94.9380% ( 179) 00:13:47.643 3.627 - 3.642: 95.7626% ( 137) 00:13:47.643 3.642 - 3.657: 96.6715% ( 151) 00:13:47.643 3.657 - 3.672: 97.3757% ( 117) 00:13:47.643 3.672 - 3.688: 97.8753% ( 83) 00:13:47.643 3.688 - 3.703: 98.3809% ( 84) 00:13:47.643 3.703 - 3.718: 98.7300% ( 58) 00:13:47.643 3.718 - 3.733: 98.9948% ( 44) 00:13:47.643 3.733 - 3.749: 99.2296% ( 39) 00:13:47.643 3.749 - 3.764: 99.3800% ( 25) 00:13:47.643 3.764 - 3.779: 99.5004% ( 20) 00:13:47.643 3.779 - 3.794: 99.5546% ( 9) 00:13:47.643 3.794 - 3.810: 99.6027% ( 8) 00:13:47.643 3.810 - 3.825: 99.6268% ( 4) 00:13:47.643 3.825 - 3.840: 99.6328% ( 1) 00:13:47.643 3.840 - 3.855: 99.6389% ( 1) 00:13:47.643 3.992 - 4.023: 99.6449% ( 1) 00:13:47.643 4.084 - 4.114: 99.6509% ( 1) 00:13:47.643 5.181 - 5.211: 99.6569% ( 1) 00:13:47.643 5.242 - 5.272: 99.6629% ( 1) 00:13:47.643 5.394 - 5.425: 99.6690% ( 1) 00:13:47.643 5.425 - 5.455: 99.6750% ( 1) 00:13:47.643 5.516 - 5.547: 99.6810% ( 1) 00:13:47.643 5.547 - 5.577: 99.6870% ( 1) 00:13:47.643 5.608 - 5.638: 99.6930% ( 1) 00:13:47.643 5.638 - 5.669: 99.6990% ( 1) 00:13:47.643 5.669 - 5.699: 99.7111% ( 2) 00:13:47.643 5.699 - 5.730: 99.7171% ( 1) 00:13:47.643 5.730 - 5.760: 99.7291% ( 2) 00:13:47.643 5.943 - 5.973: 99.7352% ( 1) 00:13:47.643 5.973 - 6.004: 99.7412% ( 1) 00:13:47.643 6.004 - 6.034: 99.7472% ( 1) 00:13:47.643 6.278 - 6.309: 99.7532% ( 1) 00:13:47.643 6.309 - 6.339: 99.7592% ( 1) 00:13:47.643 6.430 - 6.461: 99.7653% ( 1) 00:13:47.643 6.461 - 6.491: 99.7713% ( 1) 00:13:47.643 6.613 - 6.644: 99.7773% ( 1) 00:13:47.644 6.644 - 6.674: 99.7833% ( 1) 00:13:47.644 6.674 - 6.705: 99.7893% ( 1) 00:13:47.644 6.796 - 6.827: 99.7954% ( 1) 00:13:47.644 6.857 - 6.888: 99.8014% ( 1) 00:13:47.644 6.918 - 6.949: 99.8074% ( 1) 00:13:47.644 6.949 - 6.979: 99.8194% ( 2) 00:13:47.644 6.979 - 7.010: 99.8375% 
( 3) 00:13:47.644 7.101 - 7.131: 99.8435% ( 1) 00:13:47.644 7.192 - 7.223: 99.8495% ( 1) 00:13:47.644 [2024-12-09 15:06:49.369543] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:47.644 7.253 - 7.284: 99.8555% ( 1) 00:13:47.644 7.284 - 7.314: 99.8616% ( 1) 00:13:47.644 7.314 - 7.345: 99.8676% ( 1) 00:13:47.644 7.345 - 7.375: 99.8736% ( 1) 00:13:47.644 7.375 - 7.406: 99.8796% ( 1) 00:13:47.644 7.467 - 7.497: 99.8856% ( 1) 00:13:47.644 7.802 - 7.863: 99.8917% ( 1) 00:13:47.644 8.046 - 8.107: 99.8977% ( 1) 00:13:47.644 8.533 - 8.594: 99.9037% ( 1) 00:13:47.644 10.301 - 10.362: 99.9097% ( 1) 00:13:47.644 13.531 - 13.592: 99.9157% ( 1) 00:13:47.644 3994.575 - 4025.783: 100.0000% ( 14) 00:13:47.644 00:13:47.644 Complete histogram 00:13:47.644 ================== 00:13:47.644 Range in us Cumulative Count 00:13:47.644 1.714 - 1.722: 0.0241% ( 4) 00:13:47.644 1.722 - 1.730: 0.2588% ( 39) 00:13:47.644 1.730 - 1.737: 0.6862% ( 71) 00:13:47.644 1.737 - 1.745: 1.0353% ( 58) 00:13:47.644 1.745 - 1.752: 1.1195% ( 14) 00:13:47.644 1.752 - 1.760: 1.1978% ( 13) 00:13:47.644 1.760 - 1.768: 1.5830% ( 64) 00:13:47.644 1.768 - 1.775: 6.8195% ( 870) 00:13:47.644 1.775 - 1.783: 31.5637% ( 4111) 00:13:47.644 1.783 - 1.790: 66.9255% ( 5875) 00:13:47.644 1.790 - 1.798: 84.4589% ( 2913) 00:13:47.644 1.798 - 1.806: 89.5690% ( 849) 00:13:47.644 1.806 - 1.813: 92.5123% ( 489) 00:13:47.644 1.813 - 1.821: 94.1315% ( 269) 00:13:47.644 1.821 - 1.829: 94.5708% ( 73) 00:13:47.644 1.829 - 1.836: 94.9440% ( 62) 00:13:47.644 1.836 - 1.844: 95.4496% ( 84) 00:13:47.644 1.844 - 1.851: 96.1839% ( 122) 00:13:47.644 1.851 - 1.859: 97.2072% ( 170) 00:13:47.644 1.859 - 1.867: 98.1221% ( 152) 00:13:47.644 1.867 - 1.874: 98.7902% ( 111) 00:13:47.644 1.874 - 1.882: 99.0309% ( 40) 00:13:47.644 1.882 - 1.890: 99.1393% ( 18) 00:13:47.644 1.890 - 1.897: 99.2055% ( 11) 00:13:47.644 1.897 - 1.905: 99.2416% ( 6) 00:13:47.644 1.905 - 1.912: 99.2717% ( 5) 00:13:47.644 1.920 - 1.928: 99.2777% ( 1) 00:13:47.644 1.928 - 1.935: 99.2837% ( 1) 00:13:47.644 1.943 - 1.950: 99.2898% ( 1) 00:13:47.644 1.950 - 1.966: 99.3259% ( 6) 00:13:47.644 1.966 - 1.981: 99.3379% ( 2) 00:13:47.644 1.981 - 1.996: 99.3439% ( 1) 00:13:47.644 1.996 - 2.011: 99.3499% ( 1) 00:13:47.644 2.057 - 2.072: 99.3620% ( 2) 00:13:47.644 2.164 - 2.179: 99.3680% ( 1) 00:13:47.644 2.240 - 2.255: 99.3740% ( 1) 00:13:47.644 2.392 - 2.408: 99.3800% ( 1) 00:13:47.644 3.322 - 3.337: 99.3861% ( 1) 00:13:47.644 3.810 - 3.825: 99.3921% ( 1) 00:13:47.644 3.825 - 3.840: 99.3981% ( 1) 00:13:47.644 4.206 - 4.236: 99.4041% ( 1) 00:13:47.644 4.267 - 4.297: 99.4101% ( 1) 00:13:47.644 4.328 - 4.358: 99.4222% ( 2) 00:13:47.644 4.419 - 4.450: 99.4282% ( 1) 00:13:47.644 4.450 - 4.480: 99.4342% ( 1) 00:13:47.644 4.571 - 4.602: 99.4402% ( 1) 00:13:47.644 4.602 - 4.632: 99.4523% ( 2) 00:13:47.644 4.693 - 4.724: 99.4583% ( 1) 00:13:47.644 4.846 - 4.876: 99.4643% ( 1) 00:13:47.644 5.120 - 5.150: 99.4703% ( 1) 00:13:47.644 5.364 - 5.394: 99.4763% ( 1) 00:13:47.644 5.547 - 5.577: 99.4824% ( 1) 00:13:47.644 5.608 - 5.638: 99.4944% ( 2) 00:13:47.644 5.760 - 5.790: 99.5004% ( 1) 00:13:47.644 6.004 - 6.034: 99.5064% ( 1) 00:13:47.644 6.187 - 6.217: 99.5125% ( 1) 00:13:47.644 6.613 - 6.644: 99.5185% ( 1) 00:13:47.644 6.644 - 6.674: 99.5245% ( 1) 00:13:47.644 8.777 - 8.838: 99.5305% ( 1) 00:13:47.644 30.476 - 30.598: 99.5365% ( 1) 00:13:47.644 3994.575 - 4025.783: 100.0000% ( 77) 00:13:47.644 00:13:47.644 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:47.644 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:47.644 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:47.644 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:47.644 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:47.930 [ 00:13:47.930 { 00:13:47.930 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:47.930 "subtype": "Discovery", 00:13:47.930 "listen_addresses": [], 00:13:47.930 "allow_any_host": true, 00:13:47.930 "hosts": [] 00:13:47.930 }, 00:13:47.930 { 00:13:47.930 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:47.930 "subtype": "NVMe", 00:13:47.930 "listen_addresses": [ 00:13:47.930 { 00:13:47.930 "trtype": "VFIOUSER", 00:13:47.930 "adrfam": "IPv4", 00:13:47.930 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:47.930 "trsvcid": "0" 00:13:47.930 } 00:13:47.930 ], 00:13:47.930 "allow_any_host": true, 00:13:47.930 "hosts": [], 00:13:47.930 "serial_number": "SPDK1", 00:13:47.930 "model_number": "SPDK bdev Controller", 00:13:47.930 "max_namespaces": 32, 00:13:47.930 "min_cntlid": 1, 00:13:47.930 "max_cntlid": 65519, 00:13:47.930 "namespaces": [ 00:13:47.930 { 00:13:47.930 "nsid": 1, 00:13:47.930 "bdev_name": "Malloc1", 00:13:47.930 "name": "Malloc1", 00:13:47.930 "nguid": "F8251C6C346046A889C6D561F97EDCEC", 00:13:47.930 "uuid": "f8251c6c-3460-46a8-89c6-d561f97edcec" 00:13:47.930 } 00:13:47.930 ] 00:13:47.930 }, 00:13:47.930 { 00:13:47.930 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:47.930 "subtype": "NVMe", 00:13:47.930 "listen_addresses": [ 00:13:47.930 { 00:13:47.930 "trtype": "VFIOUSER", 00:13:47.930 "adrfam": "IPv4", 00:13:47.930 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:47.930 "trsvcid": "0" 00:13:47.930 } 00:13:47.930 ], 00:13:47.930 "allow_any_host": true, 00:13:47.930 "hosts": [], 00:13:47.930 "serial_number": "SPDK2", 00:13:47.930 "model_number": "SPDK bdev Controller", 00:13:47.930 "max_namespaces": 32, 00:13:47.930 "min_cntlid": 1, 00:13:47.930 "max_cntlid": 65519, 00:13:47.930 "namespaces": [ 00:13:47.930 { 00:13:47.930 "nsid": 1, 00:13:47.930 "bdev_name": "Malloc2", 00:13:47.930 "name": "Malloc2", 00:13:47.930 "nguid": "BB0E14A9D818450BAD924ACF089C22C4", 00:13:47.930 "uuid": "bb0e14a9-d818-450b-ad92-4acf089c22c4" 00:13:47.930 } 00:13:47.930 ] 00:13:47.930 } 00:13:47.930 ] 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1394833 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 
00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:47.930 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:48.190 [2024-12-09 15:06:49.782661] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.190 Malloc3 00:13:48.190 15:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:48.449 [2024-12-09 15:06:50.017442] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.449 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:48.449 Asynchronous Event Request test 00:13:48.449 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.449 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.449 Registering asynchronous event callbacks... 00:13:48.449 Starting namespace attribute notice tests for all controllers... 00:13:48.449 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:48.449 aer_cb - Changed Namespace 00:13:48.449 Cleaning up... 
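The trace above walks the first AER check end to end: the aer example is started against the vfio-user1 endpoint with a touch file, the script waits for that file, then hot-adds Malloc3 as namespace 2 so the controller raises the Changed Namespace notice logged above. A condensed sketch of that flow, using only commands visible in the trace (the polling loop is a stand-in for the autotest waitforfile helper, and the touch file appears to be created by the aer binary once its AER is armed — that ordering is inferred from the trace, not from the tool's documentation):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRADDR=/var/run/vfio-user/domain/vfio-user1/1
    SUBNQN=nqn.2019-07.io.spdk:cnode1
    AER_TOUCH_FILE=/tmp/aer_touch_file

    # background AER listener; -t names the readiness touch file, -n 2 expects namespace 2
    "$SPDK"/test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN" \
        -n 2 -g -t "$AER_TOUCH_FILE" &
    aerpid=$!

    # waitforfile stand-in: poll until the listener signals it is ready
    while [ ! -e "$AER_TOUCH_FILE" ]; do sleep 0.2; done
    rm -f "$AER_TOUCH_FILE"

    # hot-add a second namespace; this is what triggers the Changed Namespace AER
    "$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc3 -n 2
    "$SPDK"/scripts/rpc.py nvmf_get_subsystems

    # the listener exits once the expected notice arrives
    wait "$aerpid"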
00:13:48.449 [ 00:13:48.449 { 00:13:48.449 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:48.449 "subtype": "Discovery", 00:13:48.449 "listen_addresses": [], 00:13:48.449 "allow_any_host": true, 00:13:48.449 "hosts": [] 00:13:48.449 }, 00:13:48.449 { 00:13:48.449 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:48.449 "subtype": "NVMe", 00:13:48.449 "listen_addresses": [ 00:13:48.449 { 00:13:48.449 "trtype": "VFIOUSER", 00:13:48.449 "adrfam": "IPv4", 00:13:48.449 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:48.449 "trsvcid": "0" 00:13:48.449 } 00:13:48.449 ], 00:13:48.449 "allow_any_host": true, 00:13:48.449 "hosts": [], 00:13:48.449 "serial_number": "SPDK1", 00:13:48.449 "model_number": "SPDK bdev Controller", 00:13:48.449 "max_namespaces": 32, 00:13:48.449 "min_cntlid": 1, 00:13:48.449 "max_cntlid": 65519, 00:13:48.449 "namespaces": [ 00:13:48.449 { 00:13:48.449 "nsid": 1, 00:13:48.449 "bdev_name": "Malloc1", 00:13:48.449 "name": "Malloc1", 00:13:48.449 "nguid": "F8251C6C346046A889C6D561F97EDCEC", 00:13:48.449 "uuid": "f8251c6c-3460-46a8-89c6-d561f97edcec" 00:13:48.449 }, 00:13:48.449 { 00:13:48.449 "nsid": 2, 00:13:48.449 "bdev_name": "Malloc3", 00:13:48.449 "name": "Malloc3", 00:13:48.449 "nguid": "DA7D7EF43BA943A9B066577EA7E2A3F9", 00:13:48.449 "uuid": "da7d7ef4-3ba9-43a9-b066-577ea7e2a3f9" 00:13:48.449 } 00:13:48.449 ] 00:13:48.449 }, 00:13:48.449 { 00:13:48.449 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:48.449 "subtype": "NVMe", 00:13:48.449 "listen_addresses": [ 00:13:48.449 { 00:13:48.449 "trtype": "VFIOUSER", 00:13:48.449 "adrfam": "IPv4", 00:13:48.449 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:48.449 "trsvcid": "0" 00:13:48.449 } 00:13:48.449 ], 00:13:48.449 "allow_any_host": true, 00:13:48.449 "hosts": [], 00:13:48.449 "serial_number": "SPDK2", 00:13:48.449 "model_number": "SPDK bdev Controller", 00:13:48.449 "max_namespaces": 32, 00:13:48.449 "min_cntlid": 1, 00:13:48.449 "max_cntlid": 65519, 00:13:48.449 "namespaces": [ 00:13:48.449 { 00:13:48.449 "nsid": 1, 00:13:48.449 "bdev_name": "Malloc2", 00:13:48.449 "name": "Malloc2", 00:13:48.449 "nguid": "BB0E14A9D818450BAD924ACF089C22C4", 00:13:48.449 "uuid": "bb0e14a9-d818-450b-ad92-4acf089c22c4" 00:13:48.449 } 00:13:48.449 ] 00:13:48.449 } 00:13:48.449 ] 00:13:48.449 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1394833 00:13:48.449 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:48.449 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:48.449 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:48.450 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:48.711 [2024-12-09 15:06:50.268793] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:13:48.711 [2024-12-09 15:06:50.268835] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394851 ] 00:13:48.711 [2024-12-09 15:06:50.309687] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:48.711 [2024-12-09 15:06:50.314421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:48.711 [2024-12-09 15:06:50.314444] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa2ad3b9000 00:13:48.711 [2024-12-09 15:06:50.315424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.316428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.317440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.318443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.319444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.320451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.321456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.322465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:48.711 [2024-12-09 15:06:50.323470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:48.711 [2024-12-09 15:06:50.323481] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa2ad3ae000 00:13:48.711 [2024-12-09 15:06:50.324397] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:48.711 [2024-12-09 15:06:50.338477] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:48.711 [2024-12-09 15:06:50.338504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:48.711 [2024-12-09 15:06:50.340573] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:48.711 [2024-12-09 15:06:50.340609] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:48.711 [2024-12-09 15:06:50.340676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:48.711 
[2024-12-09 15:06:50.340692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:48.711 [2024-12-09 15:06:50.340697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:48.711 [2024-12-09 15:06:50.341577] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:48.711 [2024-12-09 15:06:50.341588] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:48.711 [2024-12-09 15:06:50.341594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:48.711 [2024-12-09 15:06:50.342592] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:48.711 [2024-12-09 15:06:50.342601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:48.711 [2024-12-09 15:06:50.342608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:48.711 [2024-12-09 15:06:50.343594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:48.711 [2024-12-09 15:06:50.343602] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:48.711 [2024-12-09 15:06:50.344616] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:48.711 [2024-12-09 15:06:50.344625] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:48.711 [2024-12-09 15:06:50.344629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:48.711 [2024-12-09 15:06:50.344635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:48.711 [2024-12-09 15:06:50.344743] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:48.711 [2024-12-09 15:06:50.344747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:48.711 [2024-12-09 15:06:50.344752] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:48.711 [2024-12-09 15:06:50.345613] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:48.711 [2024-12-09 15:06:50.346619] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:48.711 [2024-12-09 15:06:50.347627] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:48.711 [2024-12-09 15:06:50.348632] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:48.712 [2024-12-09 15:06:50.348671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:48.712 [2024-12-09 15:06:50.349641] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:48.712 [2024-12-09 15:06:50.349649] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:48.712 [2024-12-09 15:06:50.349653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.349670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:48.712 [2024-12-09 15:06:50.349680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.349695] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:48.712 [2024-12-09 15:06:50.349700] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.712 [2024-12-09 15:06:50.349705] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.712 [2024-12-09 15:06:50.349716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.360224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.360238] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:48.712 [2024-12-09 15:06:50.360243] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:48.712 [2024-12-09 15:06:50.360246] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:48.712 [2024-12-09 15:06:50.360251] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:48.712 [2024-12-09 15:06:50.360255] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:48.712 [2024-12-09 15:06:50.360259] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:48.712 [2024-12-09 15:06:50.360263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.360270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:48.712 [2024-12-09 
15:06:50.360279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.368223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.368235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.712 [2024-12-09 15:06:50.368243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.712 [2024-12-09 15:06:50.368250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.712 [2024-12-09 15:06:50.368258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.712 [2024-12-09 15:06:50.368262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.368271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.368279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.376221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.376228] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:48.712 [2024-12-09 15:06:50.376233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.376239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.376244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.376254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.384222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.384278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.384285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.384292] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:48.712 [2024-12-09 15:06:50.384296] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:13:48.712 [2024-12-09 15:06:50.384300] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.712 [2024-12-09 15:06:50.384305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.392221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.392232] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:48.712 [2024-12-09 15:06:50.392242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.392249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.392255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:48.712 [2024-12-09 15:06:50.392258] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.712 [2024-12-09 15:06:50.392261] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.712 [2024-12-09 15:06:50.392267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.400222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.400235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.400243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.400249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:48.712 [2024-12-09 15:06:50.400253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.712 [2024-12-09 15:06:50.400256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.712 [2024-12-09 15:06:50.400262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.408223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.408232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408268] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:48.712 [2024-12-09 15:06:50.408272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:48.712 [2024-12-09 15:06:50.408277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:48.712 [2024-12-09 15:06:50.408293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.416221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.416233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.424221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.424232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.432222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.432233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:48.712 [2024-12-09 15:06:50.440224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:48.712 [2024-12-09 15:06:50.440239] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:48.712 [2024-12-09 15:06:50.440243] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:48.712 [2024-12-09 15:06:50.440246] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:48.712 [2024-12-09 15:06:50.440250] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:48.712 [2024-12-09 15:06:50.440253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:48.712 [2024-12-09 15:06:50.440259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:48.712 [2024-12-09 15:06:50.440265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:48.712 
[2024-12-09 15:06:50.440269] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:48.712 [2024-12-09 15:06:50.440272] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.713 [2024-12-09 15:06:50.440277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:48.713 [2024-12-09 15:06:50.440284] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:48.713 [2024-12-09 15:06:50.440287] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:48.713 [2024-12-09 15:06:50.440292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.713 [2024-12-09 15:06:50.440298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:48.713 [2024-12-09 15:06:50.440304] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:48.713 [2024-12-09 15:06:50.440308] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:48.713 [2024-12-09 15:06:50.440311] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:48.713 [2024-12-09 15:06:50.440316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:48.713 [2024-12-09 15:06:50.448223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:48.713 [2024-12-09 15:06:50.448237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:48.713 [2024-12-09 15:06:50.448246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:48.713 [2024-12-09 15:06:50.448253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:48.713 ===================================================== 00:13:48.713 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:48.713 ===================================================== 00:13:48.713 Controller Capabilities/Features 00:13:48.713 ================================ 00:13:48.713 Vendor ID: 4e58 00:13:48.713 Subsystem Vendor ID: 4e58 00:13:48.713 Serial Number: SPDK2 00:13:48.713 Model Number: SPDK bdev Controller 00:13:48.713 Firmware Version: 25.01 00:13:48.713 Recommended Arb Burst: 6 00:13:48.713 IEEE OUI Identifier: 8d 6b 50 00:13:48.713 Multi-path I/O 00:13:48.713 May have multiple subsystem ports: Yes 00:13:48.713 May have multiple controllers: Yes 00:13:48.713 Associated with SR-IOV VF: No 00:13:48.713 Max Data Transfer Size: 131072 00:13:48.713 Max Number of Namespaces: 32 00:13:48.713 Max Number of I/O Queues: 127 00:13:48.713 NVMe Specification Version (VS): 1.3 00:13:48.713 NVMe Specification Version (Identify): 1.3 00:13:48.713 Maximum Queue Entries: 256 00:13:48.713 Contiguous Queues Required: Yes 00:13:48.713 Arbitration Mechanisms Supported 00:13:48.713 Weighted Round Robin: Not Supported 00:13:48.713 Vendor Specific: Not 
Supported 00:13:48.713 Reset Timeout: 15000 ms 00:13:48.713 Doorbell Stride: 4 bytes 00:13:48.713 NVM Subsystem Reset: Not Supported 00:13:48.713 Command Sets Supported 00:13:48.713 NVM Command Set: Supported 00:13:48.713 Boot Partition: Not Supported 00:13:48.713 Memory Page Size Minimum: 4096 bytes 00:13:48.713 Memory Page Size Maximum: 4096 bytes 00:13:48.713 Persistent Memory Region: Not Supported 00:13:48.713 Optional Asynchronous Events Supported 00:13:48.713 Namespace Attribute Notices: Supported 00:13:48.713 Firmware Activation Notices: Not Supported 00:13:48.713 ANA Change Notices: Not Supported 00:13:48.713 PLE Aggregate Log Change Notices: Not Supported 00:13:48.713 LBA Status Info Alert Notices: Not Supported 00:13:48.713 EGE Aggregate Log Change Notices: Not Supported 00:13:48.713 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.713 Zone Descriptor Change Notices: Not Supported 00:13:48.713 Discovery Log Change Notices: Not Supported 00:13:48.713 Controller Attributes 00:13:48.713 128-bit Host Identifier: Supported 00:13:48.713 Non-Operational Permissive Mode: Not Supported 00:13:48.713 NVM Sets: Not Supported 00:13:48.713 Read Recovery Levels: Not Supported 00:13:48.713 Endurance Groups: Not Supported 00:13:48.713 Predictable Latency Mode: Not Supported 00:13:48.713 Traffic Based Keep ALive: Not Supported 00:13:48.713 Namespace Granularity: Not Supported 00:13:48.713 SQ Associations: Not Supported 00:13:48.713 UUID List: Not Supported 00:13:48.713 Multi-Domain Subsystem: Not Supported 00:13:48.713 Fixed Capacity Management: Not Supported 00:13:48.713 Variable Capacity Management: Not Supported 00:13:48.713 Delete Endurance Group: Not Supported 00:13:48.713 Delete NVM Set: Not Supported 00:13:48.713 Extended LBA Formats Supported: Not Supported 00:13:48.713 Flexible Data Placement Supported: Not Supported 00:13:48.713 00:13:48.713 Controller Memory Buffer Support 00:13:48.713 ================================ 00:13:48.713 Supported: No 00:13:48.713 00:13:48.713 Persistent Memory Region Support 00:13:48.713 ================================ 00:13:48.713 Supported: No 00:13:48.713 00:13:48.713 Admin Command Set Attributes 00:13:48.713 ============================ 00:13:48.713 Security Send/Receive: Not Supported 00:13:48.713 Format NVM: Not Supported 00:13:48.713 Firmware Activate/Download: Not Supported 00:13:48.713 Namespace Management: Not Supported 00:13:48.713 Device Self-Test: Not Supported 00:13:48.713 Directives: Not Supported 00:13:48.713 NVMe-MI: Not Supported 00:13:48.713 Virtualization Management: Not Supported 00:13:48.713 Doorbell Buffer Config: Not Supported 00:13:48.713 Get LBA Status Capability: Not Supported 00:13:48.713 Command & Feature Lockdown Capability: Not Supported 00:13:48.713 Abort Command Limit: 4 00:13:48.713 Async Event Request Limit: 4 00:13:48.713 Number of Firmware Slots: N/A 00:13:48.713 Firmware Slot 1 Read-Only: N/A 00:13:48.713 Firmware Activation Without Reset: N/A 00:13:48.713 Multiple Update Detection Support: N/A 00:13:48.713 Firmware Update Granularity: No Information Provided 00:13:48.713 Per-Namespace SMART Log: No 00:13:48.713 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.713 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:48.713 Command Effects Log Page: Supported 00:13:48.713 Get Log Page Extended Data: Supported 00:13:48.713 Telemetry Log Pages: Not Supported 00:13:48.713 Persistent Event Log Pages: Not Supported 00:13:48.713 Supported Log Pages Log Page: May Support 00:13:48.713 Commands Supported & 
Effects Log Page: Not Supported 00:13:48.713 Feature Identifiers & Effects Log Page:May Support 00:13:48.713 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.713 Data Area 4 for Telemetry Log: Not Supported 00:13:48.713 Error Log Page Entries Supported: 128 00:13:48.713 Keep Alive: Supported 00:13:48.713 Keep Alive Granularity: 10000 ms 00:13:48.713 00:13:48.713 NVM Command Set Attributes 00:13:48.713 ========================== 00:13:48.713 Submission Queue Entry Size 00:13:48.713 Max: 64 00:13:48.713 Min: 64 00:13:48.713 Completion Queue Entry Size 00:13:48.713 Max: 16 00:13:48.713 Min: 16 00:13:48.713 Number of Namespaces: 32 00:13:48.713 Compare Command: Supported 00:13:48.713 Write Uncorrectable Command: Not Supported 00:13:48.713 Dataset Management Command: Supported 00:13:48.713 Write Zeroes Command: Supported 00:13:48.713 Set Features Save Field: Not Supported 00:13:48.713 Reservations: Not Supported 00:13:48.713 Timestamp: Not Supported 00:13:48.713 Copy: Supported 00:13:48.713 Volatile Write Cache: Present 00:13:48.713 Atomic Write Unit (Normal): 1 00:13:48.713 Atomic Write Unit (PFail): 1 00:13:48.713 Atomic Compare & Write Unit: 1 00:13:48.713 Fused Compare & Write: Supported 00:13:48.713 Scatter-Gather List 00:13:48.713 SGL Command Set: Supported (Dword aligned) 00:13:48.713 SGL Keyed: Not Supported 00:13:48.713 SGL Bit Bucket Descriptor: Not Supported 00:13:48.713 SGL Metadata Pointer: Not Supported 00:13:48.713 Oversized SGL: Not Supported 00:13:48.713 SGL Metadata Address: Not Supported 00:13:48.713 SGL Offset: Not Supported 00:13:48.713 Transport SGL Data Block: Not Supported 00:13:48.713 Replay Protected Memory Block: Not Supported 00:13:48.713 00:13:48.713 Firmware Slot Information 00:13:48.713 ========================= 00:13:48.713 Active slot: 1 00:13:48.713 Slot 1 Firmware Revision: 25.01 00:13:48.713 00:13:48.713 00:13:48.713 Commands Supported and Effects 00:13:48.713 ============================== 00:13:48.713 Admin Commands 00:13:48.713 -------------- 00:13:48.713 Get Log Page (02h): Supported 00:13:48.713 Identify (06h): Supported 00:13:48.713 Abort (08h): Supported 00:13:48.713 Set Features (09h): Supported 00:13:48.713 Get Features (0Ah): Supported 00:13:48.713 Asynchronous Event Request (0Ch): Supported 00:13:48.713 Keep Alive (18h): Supported 00:13:48.713 I/O Commands 00:13:48.713 ------------ 00:13:48.713 Flush (00h): Supported LBA-Change 00:13:48.713 Write (01h): Supported LBA-Change 00:13:48.713 Read (02h): Supported 00:13:48.713 Compare (05h): Supported 00:13:48.713 Write Zeroes (08h): Supported LBA-Change 00:13:48.713 Dataset Management (09h): Supported LBA-Change 00:13:48.713 Copy (19h): Supported LBA-Change 00:13:48.713 00:13:48.713 Error Log 00:13:48.713 ========= 00:13:48.713 00:13:48.713 Arbitration 00:13:48.713 =========== 00:13:48.713 Arbitration Burst: 1 00:13:48.713 00:13:48.714 Power Management 00:13:48.714 ================ 00:13:48.714 Number of Power States: 1 00:13:48.714 Current Power State: Power State #0 00:13:48.714 Power State #0: 00:13:48.714 Max Power: 0.00 W 00:13:48.714 Non-Operational State: Operational 00:13:48.714 Entry Latency: Not Reported 00:13:48.714 Exit Latency: Not Reported 00:13:48.714 Relative Read Throughput: 0 00:13:48.714 Relative Read Latency: 0 00:13:48.714 Relative Write Throughput: 0 00:13:48.714 Relative Write Latency: 0 00:13:48.714 Idle Power: Not Reported 00:13:48.714 Active Power: Not Reported 00:13:48.714 Non-Operational Permissive Mode: Not Supported 00:13:48.714 00:13:48.714 Health Information 
00:13:48.714 ================== 00:13:48.714 Critical Warnings: 00:13:48.714 Available Spare Space: OK 00:13:48.714 Temperature: OK 00:13:48.714 Device Reliability: OK 00:13:48.714 Read Only: No 00:13:48.714 Volatile Memory Backup: OK 00:13:48.714 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:48.714 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:48.714 Available Spare: 0% 00:13:48.714 Available Sp[2024-12-09 15:06:50.448341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:48.714 [2024-12-09 15:06:50.456221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:48.714 [2024-12-09 15:06:50.456255] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:48.714 [2024-12-09 15:06:50.456263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.714 [2024-12-09 15:06:50.456269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.714 [2024-12-09 15:06:50.456275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.714 [2024-12-09 15:06:50.456281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.714 [2024-12-09 15:06:50.456339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:48.714 [2024-12-09 15:06:50.456350] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:48.714 [2024-12-09 15:06:50.457345] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:48.714 [2024-12-09 15:06:50.457392] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:48.714 [2024-12-09 15:06:50.457398] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:48.714 [2024-12-09 15:06:50.458342] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:48.714 [2024-12-09 15:06:50.458353] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:48.714 [2024-12-09 15:06:50.458438] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:48.714 [2024-12-09 15:06:50.459394] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:48.714 are Threshold: 0% 00:13:48.714 Life Percentage Used: 0% 00:13:48.714 Data Units Read: 0 00:13:48.714 Data Units Written: 0 00:13:48.714 Host Read Commands: 0 00:13:48.714 Host Write Commands: 0 00:13:48.714 Controller Busy Time: 0 minutes 00:13:48.714 Power Cycles: 0 00:13:48.714 Power On Hours: 0 hours 00:13:48.714 Unsafe Shutdowns: 0 00:13:48.714 Unrecoverable Media Errors: 0 00:13:48.714 Lifetime Error Log Entries: 0 00:13:48.714 Warning Temperature 
Time: 0 minutes 00:13:48.714 Critical Temperature Time: 0 minutes 00:13:48.714 00:13:48.714 Number of Queues 00:13:48.714 ================ 00:13:48.714 Number of I/O Submission Queues: 127 00:13:48.714 Number of I/O Completion Queues: 127 00:13:48.714 00:13:48.714 Active Namespaces 00:13:48.714 ================= 00:13:48.714 Namespace ID:1 00:13:48.714 Error Recovery Timeout: Unlimited 00:13:48.714 Command Set Identifier: NVM (00h) 00:13:48.714 Deallocate: Supported 00:13:48.714 Deallocated/Unwritten Error: Not Supported 00:13:48.714 Deallocated Read Value: Unknown 00:13:48.714 Deallocate in Write Zeroes: Not Supported 00:13:48.714 Deallocated Guard Field: 0xFFFF 00:13:48.714 Flush: Supported 00:13:48.714 Reservation: Supported 00:13:48.714 Namespace Sharing Capabilities: Multiple Controllers 00:13:48.714 Size (in LBAs): 131072 (0GiB) 00:13:48.714 Capacity (in LBAs): 131072 (0GiB) 00:13:48.714 Utilization (in LBAs): 131072 (0GiB) 00:13:48.714 NGUID: BB0E14A9D818450BAD924ACF089C22C4 00:13:48.714 UUID: bb0e14a9-d818-450b-ad92-4acf089c22c4 00:13:48.714 Thin Provisioning: Not Supported 00:13:48.714 Per-NS Atomic Units: Yes 00:13:48.714 Atomic Boundary Size (Normal): 0 00:13:48.714 Atomic Boundary Size (PFail): 0 00:13:48.714 Atomic Boundary Offset: 0 00:13:48.714 Maximum Single Source Range Length: 65535 00:13:48.714 Maximum Copy Length: 65535 00:13:48.714 Maximum Source Range Count: 1 00:13:48.714 NGUID/EUI64 Never Reused: No 00:13:48.714 Namespace Write Protected: No 00:13:48.714 Number of LBA Formats: 1 00:13:48.714 Current LBA Format: LBA Format #00 00:13:48.714 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:48.714 00:13:48.714 15:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:48.974 [2024-12-09 15:06:50.687595] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.434 Initializing NVMe Controllers 00:13:54.434 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:54.434 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:54.434 Initialization complete. Launching workers. 
00:13:54.434 ======================================================== 00:13:54.434 Latency(us) 00:13:54.434 Device Information : IOPS MiB/s Average min max 00:13:54.434 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.96 156.05 3203.79 961.75 6672.23 00:13:54.434 ======================================================== 00:13:54.434 Total : 39947.96 156.05 3203.79 961.75 6672.23 00:13:54.434 00:13:54.434 [2024-12-09 15:06:55.792489] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.434 15:06:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:54.434 [2024-12-09 15:06:56.023166] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:59.703 Initializing NVMe Controllers 00:13:59.703 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:59.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:59.703 Initialization complete. Launching workers. 00:13:59.703 ======================================================== 00:13:59.703 Latency(us) 00:13:59.703 Device Information : IOPS MiB/s Average min max 00:13:59.704 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39945.06 156.04 3204.23 972.46 9582.82 00:13:59.704 ======================================================== 00:13:59.704 Total : 39945.06 156.04 3204.23 972.46 9582.82 00:13:59.704 00:13:59.704 [2024-12-09 15:07:01.041528] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:59.704 15:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:59.704 [2024-12-09 15:07:01.247775] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.996 [2024-12-09 15:07:06.387408] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.996 Initializing NVMe Controllers 00:14:04.996 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:04.996 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:04.996 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:04.996 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:04.996 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:04.996 Initialization complete. Launching workers. 
00:14:04.996 Starting thread on core 2 00:14:04.996 Starting thread on core 3 00:14:04.996 Starting thread on core 1 00:14:04.996 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:04.996 [2024-12-09 15:07:06.682695] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.284 [2024-12-09 15:07:09.747432] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.284 Initializing NVMe Controllers 00:14:08.284 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.284 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:08.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:08.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:08.284 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:08.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:08.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:08.284 Initialization complete. Launching workers. 00:14:08.284 Starting thread on core 1 with urgent priority queue 00:14:08.284 Starting thread on core 2 with urgent priority queue 00:14:08.284 Starting thread on core 3 with urgent priority queue 00:14:08.284 Starting thread on core 0 with urgent priority queue 00:14:08.284 SPDK bdev Controller (SPDK2 ) core 0: 5498.67 IO/s 18.19 secs/100000 ios 00:14:08.284 SPDK bdev Controller (SPDK2 ) core 1: 5300.67 IO/s 18.87 secs/100000 ios 00:14:08.284 SPDK bdev Controller (SPDK2 ) core 2: 4763.67 IO/s 20.99 secs/100000 ios 00:14:08.284 SPDK bdev Controller (SPDK2 ) core 3: 5890.00 IO/s 16.98 secs/100000 ios 00:14:08.284 ======================================================== 00:14:08.284 00:14:08.284 15:07:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:08.284 [2024-12-09 15:07:10.040686] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:08.284 Initializing NVMe Controllers 00:14:08.284 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.284 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:08.284 Namespace ID: 1 size: 0GB 00:14:08.284 Initialization complete. 00:14:08.284 INFO: using host memory buffer for IO 00:14:08.284 Hello world! 
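Every example binary in this stage is pointed at the same vfio-user2 endpoint through a single transport-ID string; only the workload flags change (the overhead test that follows below reuses the same -r argument). A recap of the invocations seen above, with paths, flags, runtimes and core masks copied from the trace — they are this job's choices rather than required values:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # 4 KiB reads, then writes: queue depth 128, 5 s each, pinned to core 1 (mask 0x2)
    "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

    # reconnect and arbitration drive the same endpoint from several cores
    "$SPDK"/build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    "$SPDK"/build/examples/arbitration -r "$TRID" -t 3 -d 256 -g

    # single-command sanity check against namespace 1
    "$SPDK"/build/examples/hello_world -r "$TRID" -d 256 -g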
00:14:08.284 [2024-12-09 15:07:10.051013] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:08.542 15:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:08.542 [2024-12-09 15:07:10.335612] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.919 Initializing NVMe Controllers 00:14:09.919 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:09.919 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:09.919 Initialization complete. Launching workers. 00:14:09.919 submit (in ns) avg, min, max = 6265.7, 3138.1, 3998679.0 00:14:09.919 complete (in ns) avg, min, max = 21373.5, 1710.5, 3999320.0 00:14:09.919 00:14:09.919 Submit histogram 00:14:09.919 ================ 00:14:09.919 Range in us Cumulative Count 00:14:09.919 3.124 - 3.139: 0.0061% ( 1) 00:14:09.919 3.139 - 3.154: 0.0303% ( 4) 00:14:09.919 3.154 - 3.170: 0.0727% ( 7) 00:14:09.919 3.170 - 3.185: 0.1272% ( 9) 00:14:09.919 3.185 - 3.200: 0.1878% ( 10) 00:14:09.919 3.200 - 3.215: 0.5088% ( 53) 00:14:09.919 3.215 - 3.230: 1.9746% ( 242) 00:14:09.919 3.230 - 3.246: 4.9425% ( 490) 00:14:09.919 3.246 - 3.261: 8.8734% ( 649) 00:14:09.919 3.261 - 3.276: 13.2162% ( 717) 00:14:09.919 3.276 - 3.291: 19.2489% ( 996) 00:14:09.919 3.291 - 3.307: 25.7177% ( 1068) 00:14:09.919 3.307 - 3.322: 31.3386% ( 928) 00:14:09.919 3.322 - 3.337: 37.0260% ( 939) 00:14:09.919 3.337 - 3.352: 43.2586% ( 1029) 00:14:09.919 3.352 - 3.368: 48.7038% ( 899) 00:14:09.919 3.368 - 3.383: 54.1611% ( 901) 00:14:09.919 3.383 - 3.398: 60.9207% ( 1116) 00:14:09.919 3.398 - 3.413: 66.6323% ( 943) 00:14:09.919 3.413 - 3.429: 71.7202% ( 840) 00:14:09.919 3.429 - 3.444: 76.8686% ( 850) 00:14:09.919 3.444 - 3.459: 80.8237% ( 653) 00:14:09.919 3.459 - 3.474: 83.7977% ( 491) 00:14:09.919 3.474 - 3.490: 85.7965% ( 330) 00:14:09.919 3.490 - 3.505: 87.2138% ( 234) 00:14:09.919 3.505 - 3.520: 88.0133% ( 132) 00:14:09.919 3.520 - 3.535: 88.6130% ( 99) 00:14:09.919 3.535 - 3.550: 89.3337% ( 119) 00:14:09.919 3.550 - 3.566: 90.1090% ( 128) 00:14:09.919 3.566 - 3.581: 90.8722% ( 126) 00:14:09.919 3.581 - 3.596: 91.6051% ( 121) 00:14:09.919 3.596 - 3.611: 92.3743% ( 127) 00:14:09.919 3.611 - 3.627: 93.2647% ( 147) 00:14:09.919 3.627 - 3.642: 94.1490% ( 146) 00:14:09.919 3.642 - 3.657: 95.1060% ( 158) 00:14:09.919 3.657 - 3.672: 96.0509% ( 156) 00:14:09.919 3.672 - 3.688: 96.7474% ( 115) 00:14:09.919 3.688 - 3.703: 97.4924% ( 123) 00:14:09.919 3.703 - 3.718: 98.0618% ( 94) 00:14:09.919 3.718 - 3.733: 98.4615% ( 66) 00:14:09.919 3.733 - 3.749: 98.7765% ( 52) 00:14:09.919 3.749 - 3.764: 99.0491% ( 45) 00:14:09.919 3.764 - 3.779: 99.2126% ( 27) 00:14:09.919 3.779 - 3.794: 99.3701% ( 26) 00:14:09.919 3.794 - 3.810: 99.4973% ( 21) 00:14:09.919 3.810 - 3.825: 99.5336% ( 6) 00:14:09.919 3.825 - 3.840: 99.6002% ( 11) 00:14:09.919 3.840 - 3.855: 99.6366% ( 6) 00:14:09.919 3.855 - 3.870: 99.6487% ( 2) 00:14:09.919 3.901 - 3.931: 99.6548% ( 1) 00:14:09.919 3.992 - 4.023: 99.6608% ( 1) 00:14:09.919 4.023 - 4.053: 99.6669% ( 1) 00:14:09.919 5.090 - 5.120: 99.6729% ( 1) 00:14:09.919 5.150 - 5.181: 99.6790% ( 1) 00:14:09.919 5.242 - 5.272: 99.6911% ( 2) 00:14:09.919 5.303 - 5.333: 99.6972% ( 1) 00:14:09.919 5.486 - 5.516: 99.7093% ( 2) 00:14:09.919 
5.547 - 5.577: 99.7214% ( 2) 00:14:09.919 5.577 - 5.608: 99.7274% ( 1) 00:14:09.919 5.730 - 5.760: 99.7456% ( 3) 00:14:09.919 5.851 - 5.882: 99.7517% ( 1) 00:14:09.919 5.912 - 5.943: 99.7638% ( 2) 00:14:09.919 5.943 - 5.973: 99.7698% ( 1) 00:14:09.919 6.004 - 6.034: 99.7759% ( 1) 00:14:09.919 6.095 - 6.126: 99.7880% ( 2) 00:14:09.919 6.217 - 6.248: 99.8001% ( 2) 00:14:09.919 6.248 - 6.278: 99.8062% ( 1) 00:14:09.919 6.309 - 6.339: 99.8183% ( 2) 00:14:09.919 6.339 - 6.370: 99.8243% ( 1) 00:14:09.919 6.430 - 6.461: 99.8304% ( 1) 00:14:09.919 6.491 - 6.522: 99.8365% ( 1) 00:14:09.919 6.613 - 6.644: 99.8425% ( 1) 00:14:09.919 6.644 - 6.674: 99.8486% ( 1) 00:14:09.919 6.674 - 6.705: 99.8607% ( 2) 00:14:09.919 6.735 - 6.766: 99.8728% ( 2) 00:14:09.919 6.827 - 6.857: 99.8789% ( 1) 00:14:09.919 6.979 - 7.010: 99.8849% ( 1) 00:14:09.919 7.040 - 7.070: 99.8910% ( 1) 00:14:09.920 7.192 - 7.223: 99.8970% ( 1) 00:14:09.920 [2024-12-09 15:07:11.436215] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:09.920 7.253 - 7.284: 99.9031% ( 1) 00:14:09.920 8.107 - 8.168: 99.9091% ( 1) 00:14:09.920 8.229 - 8.290: 99.9152% ( 1) 00:14:09.920 13.714 - 13.775: 99.9213% ( 1) 00:14:09.920 15.848 - 15.970: 99.9273% ( 1) 00:14:09.920 3417.234 - 3432.838: 99.9334% ( 1) 00:14:09.920 3994.575 - 4025.783: 100.0000% ( 11) 00:14:09.920 00:14:09.920 Complete histogram 00:14:09.920 ================== 00:14:09.920 Range in us Cumulative Count 00:14:09.920 1.707 - 1.714: 0.0061% ( 1) 00:14:09.920 1.714 - 1.722: 0.1817% ( 29) 00:14:09.920 1.722 - 1.730: 0.7329% ( 91) 00:14:09.920 1.730 - 1.737: 1.1508% ( 69) 00:14:09.920 1.737 - 1.745: 1.2780% ( 21) 00:14:09.920 1.745 - 1.752: 1.2962% ( 3) 00:14:09.920 1.752 - 1.760: 1.3749% ( 13) 00:14:09.920 1.760 - 1.768: 3.1254% ( 289) 00:14:09.920 1.768 - 1.775: 19.8789% ( 2766) 00:14:09.920 1.775 - 1.783: 52.4591% ( 5379) 00:14:09.920 1.783 - 1.790: 69.3640% ( 2791) 00:14:09.920 1.790 - 1.798: 73.9067% ( 750) 00:14:09.920 1.798 - 1.806: 76.6323% ( 450) 00:14:09.920 1.806 - 1.813: 78.1345% ( 248) 00:14:09.920 1.813 - 1.821: 79.0127% ( 145) 00:14:09.920 1.821 - 1.829: 82.2532% ( 535) 00:14:09.920 1.829 - 1.836: 89.3580% ( 1173) 00:14:09.920 1.836 - 1.844: 94.3792% ( 829) 00:14:09.920 1.844 - 1.851: 96.2932% ( 316) 00:14:09.920 1.851 - 1.859: 97.6075% ( 217) 00:14:09.920 1.859 - 1.867: 98.4858% ( 145) 00:14:09.920 1.867 - 1.874: 98.7886% ( 50) 00:14:09.920 1.874 - 1.882: 98.9219% ( 22) 00:14:09.920 1.882 - 1.890: 98.9945% ( 12) 00:14:09.920 1.890 - 1.897: 99.0793% ( 14) 00:14:09.920 1.897 - 1.905: 99.1763% ( 16) 00:14:09.920 1.905 - 1.912: 99.2368% ( 10) 00:14:09.920 1.912 - 1.920: 99.2792% ( 7) 00:14:09.920 1.920 - 1.928: 99.3095% ( 5) 00:14:09.920 1.928 - 1.935: 99.3156% ( 1) 00:14:09.920 1.943 - 1.950: 99.3216% ( 1) 00:14:09.920 1.950 - 1.966: 99.3398% ( 3) 00:14:09.920 1.966 - 1.981: 99.3459% ( 1) 00:14:09.920 1.996 - 2.011: 99.3519% ( 1) 00:14:09.920 2.027 - 2.042: 99.3580% ( 1) 00:14:09.920 2.042 - 2.057: 99.3640% ( 1) 00:14:09.920 2.194 - 2.210: 99.3701% ( 1) 00:14:09.920 3.642 - 3.657: 99.3761% ( 1) 00:14:09.920 3.779 - 3.794: 99.3822% ( 1) 00:14:09.920 3.962 - 3.992: 99.3882% ( 1) 00:14:09.920 4.053 - 4.084: 99.3943% ( 1) 00:14:09.920 4.175 - 4.206: 99.4004% ( 1) 00:14:09.920 4.206 - 4.236: 99.4125% ( 2) 00:14:09.920 4.236 - 4.267: 99.4246% ( 2) 00:14:09.920 4.297 - 4.328: 99.4306% ( 1) 00:14:09.920 4.541 - 4.571: 99.4367% ( 1) 00:14:09.920 4.602 - 4.632: 99.4428% ( 1) 00:14:09.920 4.663 - 4.693: 99.4488% ( 1) 00:14:09.920 
4.693 - 4.724: 99.4549% ( 1) 00:14:09.920 4.754 - 4.785: 99.4609% ( 1) 00:14:09.920 4.785 - 4.815: 99.4670% ( 1) 00:14:09.920 4.846 - 4.876: 99.4730% ( 1) 00:14:09.920 4.876 - 4.907: 99.4791% ( 1) 00:14:09.920 5.090 - 5.120: 99.4852% ( 1) 00:14:09.920 6.034 - 6.065: 99.4912% ( 1) 00:14:09.920 6.857 - 6.888: 99.4973% ( 1) 00:14:09.920 12.556 - 12.617: 99.5033% ( 1) 00:14:09.920 39.497 - 39.741: 99.5094% ( 1) 00:14:09.920 3651.291 - 3666.895: 99.5154% ( 1) 00:14:09.920 3978.971 - 3994.575: 99.5215% ( 1) 00:14:09.920 3994.575 - 4025.783: 100.0000% ( 79) 00:14:09.920 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:09.920 [ 00:14:09.920 { 00:14:09.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:09.920 "subtype": "Discovery", 00:14:09.920 "listen_addresses": [], 00:14:09.920 "allow_any_host": true, 00:14:09.920 "hosts": [] 00:14:09.920 }, 00:14:09.920 { 00:14:09.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:09.920 "subtype": "NVMe", 00:14:09.920 "listen_addresses": [ 00:14:09.920 { 00:14:09.920 "trtype": "VFIOUSER", 00:14:09.920 "adrfam": "IPv4", 00:14:09.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:09.920 "trsvcid": "0" 00:14:09.920 } 00:14:09.920 ], 00:14:09.920 "allow_any_host": true, 00:14:09.920 "hosts": [], 00:14:09.920 "serial_number": "SPDK1", 00:14:09.920 "model_number": "SPDK bdev Controller", 00:14:09.920 "max_namespaces": 32, 00:14:09.920 "min_cntlid": 1, 00:14:09.920 "max_cntlid": 65519, 00:14:09.920 "namespaces": [ 00:14:09.920 { 00:14:09.920 "nsid": 1, 00:14:09.920 "bdev_name": "Malloc1", 00:14:09.920 "name": "Malloc1", 00:14:09.920 "nguid": "F8251C6C346046A889C6D561F97EDCEC", 00:14:09.920 "uuid": "f8251c6c-3460-46a8-89c6-d561f97edcec" 00:14:09.920 }, 00:14:09.920 { 00:14:09.920 "nsid": 2, 00:14:09.920 "bdev_name": "Malloc3", 00:14:09.920 "name": "Malloc3", 00:14:09.920 "nguid": "DA7D7EF43BA943A9B066577EA7E2A3F9", 00:14:09.920 "uuid": "da7d7ef4-3ba9-43a9-b066-577ea7e2a3f9" 00:14:09.920 } 00:14:09.920 ] 00:14:09.920 }, 00:14:09.920 { 00:14:09.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:09.920 "subtype": "NVMe", 00:14:09.920 "listen_addresses": [ 00:14:09.920 { 00:14:09.920 "trtype": "VFIOUSER", 00:14:09.920 "adrfam": "IPv4", 00:14:09.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:09.920 "trsvcid": "0" 00:14:09.920 } 00:14:09.920 ], 00:14:09.920 "allow_any_host": true, 00:14:09.920 "hosts": [], 00:14:09.920 "serial_number": "SPDK2", 00:14:09.920 "model_number": "SPDK bdev Controller", 00:14:09.920 "max_namespaces": 32, 00:14:09.920 "min_cntlid": 1, 00:14:09.920 "max_cntlid": 65519, 00:14:09.920 "namespaces": [ 00:14:09.920 { 00:14:09.920 "nsid": 1, 00:14:09.920 "bdev_name": "Malloc2", 00:14:09.920 "name": "Malloc2", 00:14:09.920 "nguid": "BB0E14A9D818450BAD924ACF089C22C4", 00:14:09.920 "uuid": 
"bb0e14a9-d818-450b-ad92-4acf089c22c4" 00:14:09.920 } 00:14:09.920 ] 00:14:09.920 } 00:14:09.920 ] 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1398418 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:09.920 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:10.178 [2024-12-09 15:07:11.850396] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.178 Malloc4 00:14:10.178 15:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:10.436 [2024-12-09 15:07:12.103413] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.436 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:10.436 Asynchronous Event Request test 00:14:10.436 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.436 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.436 Registering asynchronous event callbacks... 00:14:10.436 Starting namespace attribute notice tests for all controllers... 00:14:10.436 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:10.436 aer_cb - Changed Namespace 00:14:10.436 Cleaning up... 
00:14:10.695 [ 00:14:10.695 { 00:14:10.695 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:10.695 "subtype": "Discovery", 00:14:10.695 "listen_addresses": [], 00:14:10.695 "allow_any_host": true, 00:14:10.695 "hosts": [] 00:14:10.695 }, 00:14:10.695 { 00:14:10.695 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:10.695 "subtype": "NVMe", 00:14:10.695 "listen_addresses": [ 00:14:10.695 { 00:14:10.695 "trtype": "VFIOUSER", 00:14:10.695 "adrfam": "IPv4", 00:14:10.695 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:10.695 "trsvcid": "0" 00:14:10.695 } 00:14:10.695 ], 00:14:10.695 "allow_any_host": true, 00:14:10.695 "hosts": [], 00:14:10.695 "serial_number": "SPDK1", 00:14:10.695 "model_number": "SPDK bdev Controller", 00:14:10.695 "max_namespaces": 32, 00:14:10.695 "min_cntlid": 1, 00:14:10.695 "max_cntlid": 65519, 00:14:10.695 "namespaces": [ 00:14:10.695 { 00:14:10.695 "nsid": 1, 00:14:10.695 "bdev_name": "Malloc1", 00:14:10.695 "name": "Malloc1", 00:14:10.695 "nguid": "F8251C6C346046A889C6D561F97EDCEC", 00:14:10.695 "uuid": "f8251c6c-3460-46a8-89c6-d561f97edcec" 00:14:10.695 }, 00:14:10.695 { 00:14:10.695 "nsid": 2, 00:14:10.695 "bdev_name": "Malloc3", 00:14:10.695 "name": "Malloc3", 00:14:10.695 "nguid": "DA7D7EF43BA943A9B066577EA7E2A3F9", 00:14:10.695 "uuid": "da7d7ef4-3ba9-43a9-b066-577ea7e2a3f9" 00:14:10.695 } 00:14:10.695 ] 00:14:10.695 }, 00:14:10.695 { 00:14:10.695 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:10.695 "subtype": "NVMe", 00:14:10.695 "listen_addresses": [ 00:14:10.695 { 00:14:10.695 "trtype": "VFIOUSER", 00:14:10.695 "adrfam": "IPv4", 00:14:10.695 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:10.695 "trsvcid": "0" 00:14:10.695 } 00:14:10.695 ], 00:14:10.695 "allow_any_host": true, 00:14:10.695 "hosts": [], 00:14:10.695 "serial_number": "SPDK2", 00:14:10.695 "model_number": "SPDK bdev Controller", 00:14:10.695 "max_namespaces": 32, 00:14:10.695 "min_cntlid": 1, 00:14:10.695 "max_cntlid": 65519, 00:14:10.695 "namespaces": [ 00:14:10.695 { 00:14:10.695 "nsid": 1, 00:14:10.695 "bdev_name": "Malloc2", 00:14:10.695 "name": "Malloc2", 00:14:10.695 "nguid": "BB0E14A9D818450BAD924ACF089C22C4", 00:14:10.695 "uuid": "bb0e14a9-d818-450b-ad92-4acf089c22c4" 00:14:10.695 }, 00:14:10.695 { 00:14:10.695 "nsid": 2, 00:14:10.695 "bdev_name": "Malloc4", 00:14:10.695 "name": "Malloc4", 00:14:10.695 "nguid": "DD099515B88849F59691670E24DEF592", 00:14:10.695 "uuid": "dd099515-b888-49f5-9691-670e24def592" 00:14:10.695 } 00:14:10.695 ] 00:14:10.695 } 00:14:10.695 ] 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1398418 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1390744 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1390744 ']' 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1390744 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1390744 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1390744' 00:14:10.695 killing process with pid 1390744 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1390744 00:14:10.695 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1390744 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1398503 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1398503' 00:14:10.954 Process pid: 1398503 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1398503 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1398503 ']' 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.954 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:10.954 [2024-12-09 15:07:12.662363] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:10.954 [2024-12-09 15:07:12.663210] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:14:10.954 [2024-12-09 15:07:12.663250] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.954 [2024-12-09 15:07:12.723996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.213 [2024-12-09 15:07:12.765802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.213 [2024-12-09 15:07:12.765835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.213 [2024-12-09 15:07:12.765842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.213 [2024-12-09 15:07:12.765848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.213 [2024-12-09 15:07:12.765853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.213 [2024-12-09 15:07:12.767204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.213 [2024-12-09 15:07:12.767236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.213 [2024-12-09 15:07:12.767345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.213 [2024-12-09 15:07:12.767345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.213 [2024-12-09 15:07:12.834373] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:11.213 [2024-12-09 15:07:12.834836] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:11.213 [2024-12-09 15:07:12.834924] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:11.213 [2024-12-09 15:07:12.835300] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:11.213 [2024-12-09 15:07:12.835309] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
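The restart above brings the target back in interrupt mode: nvmf_tgt is launched with --interrupt-mode on the 0-3 core list, the four reactors come up, and every spdk_thread is switched to intr mode before the VFIOUSER transport is created with the '-M -I' transport arguments captured by setup_nvmf_vfio_user. A hedged sketch of that bring-up; SPDK_BIN and RPC are placeholders for the build and rpc.py paths used in this workspace, the harness itself waits via waitforlisten, and the until-loop below only approximates that:

  SPDK_BIN=/path/to/spdk/build/bin        # placeholder
  RPC=/path/to/spdk/scripts/rpc.py        # placeholder
  "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  # wait until the RPC socket answers before configuring the target
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 1; done
  # same transport call as the @64 step in the trace below
  "$RPC" nvmf_create_transport -t VFIOUSER -M -I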
00:14:11.213 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.213 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:11.213 15:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:12.150 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:12.409 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:12.409 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:12.409 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.409 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:12.409 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:12.668 Malloc1 00:14:12.668 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:12.927 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:12.927 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:13.186 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.186 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:13.186 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:13.444 Malloc2 00:14:13.444 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:13.702 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:13.961 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1398503 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1398503 ']' 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1398503 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398503 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398503' 00:14:14.220 killing process with pid 1398503 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1398503 00:14:14.220 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1398503 00:14:14.220 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:14.220 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:14.220 00:14:14.220 real 0m50.860s 00:14:14.220 user 3m16.821s 00:14:14.220 sys 0m3.273s 00:14:14.220 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.220 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:14.220 ************************************ 00:14:14.220 END TEST nvmf_vfio_user 00:14:14.220 ************************************ 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.480 ************************************ 00:14:14.480 START TEST nvmf_vfio_user_nvme_compliance 00:14:14.480 ************************************ 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:14.480 * Looking for test storage... 
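Before the compliance suite's storage lookup continues below, the per-device loop the just-finished nvmf_vfio_user test ran (the seq 1 2 loop in the trace above) is worth spelling out, since it is the whole recipe for exposing a malloc bdev over vfio-user: create the socket directory, the bdev, the subsystem, the namespace, and finally the VFIOUSER listener. A sketch under the assumption that rpc.py lives at the placeholder path RPC and the target is already running:

  RPC=/path/to/spdk/scripts/rpc.py        # placeholder
  for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
    "$RPC" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done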
00:14:14.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.480 --rc genhtml_branch_coverage=1 00:14:14.480 --rc genhtml_function_coverage=1 00:14:14.480 --rc genhtml_legend=1 00:14:14.480 --rc geninfo_all_blocks=1 00:14:14.480 --rc geninfo_unexecuted_blocks=1 00:14:14.480 00:14:14.480 ' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.480 --rc genhtml_branch_coverage=1 00:14:14.480 --rc genhtml_function_coverage=1 00:14:14.480 --rc genhtml_legend=1 00:14:14.480 --rc geninfo_all_blocks=1 00:14:14.480 --rc geninfo_unexecuted_blocks=1 00:14:14.480 00:14:14.480 ' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.480 --rc genhtml_branch_coverage=1 00:14:14.480 --rc genhtml_function_coverage=1 00:14:14.480 --rc genhtml_legend=1 00:14:14.480 --rc geninfo_all_blocks=1 00:14:14.480 --rc geninfo_unexecuted_blocks=1 00:14:14.480 00:14:14.480 ' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:14.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.480 --rc genhtml_branch_coverage=1 00:14:14.480 --rc genhtml_function_coverage=1 00:14:14.480 --rc genhtml_legend=1 00:14:14.480 --rc geninfo_all_blocks=1 00:14:14.480 --rc 
geninfo_unexecuted_blocks=1 00:14:14.480 00:14:14.480 ' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.480 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1399250 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1399250' 00:14:14.740 Process pid: 1399250 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1399250 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1399250 ']' 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.740 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:14.740 [2024-12-09 15:07:16.335534] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
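The compliance run above starts its own target on a three-core mask and guards it with a cleanup trap before waitforlisten blocks on /var/tmp/spdk.sock. A rough equivalent without the shared autotest helpers; SPDK_BIN and the rpc.py path are placeholders, and kill -9 stands in for the killprocess helper used by the real trap:

  SPDK_BIN=/path/to/spdk/build/bin        # placeholder
  RPC=/path/to/spdk/scripts/rpc.py        # placeholder
  "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  echo "Process pid: $nvmfpid"
  # kill the target even if the suite aborts early (mirrors the trap in compliance.sh)
  trap 'kill -9 "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
  # stand-in for waitforlisten: poll the default RPC socket until it answers
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done
  # ... configure the target and run the suite here ...
  # normal teardown clears the trap first, then stops the target
  trap - SIGINT SIGTERM EXIT
  kill -9 "$nvmfpid" 2>/dev/null
  wait "$nvmfpid" 2>/dev/null || true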
00:14:14.740 [2024-12-09 15:07:16.335581] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.740 [2024-12-09 15:07:16.407879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.740 [2024-12-09 15:07:16.446503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.740 [2024-12-09 15:07:16.446539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.740 [2024-12-09 15:07:16.446546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.740 [2024-12-09 15:07:16.446551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.740 [2024-12-09 15:07:16.446556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.740 [2024-12-09 15:07:16.447915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.740 [2024-12-09 15:07:16.448023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.740 [2024-12-09 15:07:16.448024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.999 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.999 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:14.999 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 malloc0 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:15.935 15:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 15:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:16.194 00:14:16.194 00:14:16.194 CUnit - A unit testing framework for C - Version 2.1-3 00:14:16.194 http://cunit.sourceforge.net/ 00:14:16.194 00:14:16.194 00:14:16.194 Suite: nvme_compliance 00:14:16.194 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 15:07:17.796652] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.194 [2024-12-09 15:07:17.797987] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:16.194 [2024-12-09 15:07:17.798001] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:16.194 [2024-12-09 15:07:17.798007] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:16.194 [2024-12-09 15:07:17.799675] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.194 passed 00:14:16.194 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 15:07:17.879251] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.194 [2024-12-09 15:07:17.885306] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.194 passed 00:14:16.194 Test: admin_identify_ns ...[2024-12-09 15:07:17.960905] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.453 [2024-12-09 15:07:18.020233] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:16.453 [2024-12-09 15:07:18.028236] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:16.453 [2024-12-09 15:07:18.049338] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:16.453 passed 00:14:16.453 Test: admin_get_features_mandatory_features ...[2024-12-09 15:07:18.125924] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.453 [2024-12-09 15:07:18.128946] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.453 passed 00:14:16.453 Test: admin_get_features_optional_features ...[2024-12-09 15:07:18.208503] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.453 [2024-12-09 15:07:18.211526] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.453 passed 00:14:16.711 Test: admin_set_features_number_of_queues ...[2024-12-09 15:07:18.290256] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.712 [2024-12-09 15:07:18.396320] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.712 passed 00:14:16.712 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 15:07:18.471883] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.712 [2024-12-09 15:07:18.474901] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.970 passed 00:14:16.970 Test: admin_get_log_page_with_lpo ...[2024-12-09 15:07:18.554604] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.970 [2024-12-09 15:07:18.623227] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:16.970 [2024-12-09 15:07:18.636275] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.970 passed 00:14:16.970 Test: fabric_property_get ...[2024-12-09 15:07:18.716050] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.970 [2024-12-09 15:07:18.717298] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:16.970 [2024-12-09 15:07:18.719070] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.970 passed 00:14:17.229 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 15:07:18.794613] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.229 [2024-12-09 15:07:18.795844] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:17.229 [2024-12-09 15:07:18.798639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.229 passed 00:14:17.229 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 15:07:18.877383] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.229 [2024-12-09 15:07:18.962225] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:17.229 [2024-12-09 15:07:18.978224] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:17.229 [2024-12-09 15:07:18.983312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.229 passed 00:14:17.488 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 15:07:19.056991] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.488 [2024-12-09 15:07:19.058234] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:17.488 [2024-12-09 15:07:19.062016] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.488 passed 00:14:17.488 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 15:07:19.137733] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.488 [2024-12-09 15:07:19.216227] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:17.488 [2024-12-09 15:07:19.240224] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:17.488 [2024-12-09 15:07:19.245307] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.488 passed 00:14:17.756 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 15:07:19.321075] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.756 [2024-12-09 15:07:19.322309] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:17.756 [2024-12-09 15:07:19.322333] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:17.756 [2024-12-09 15:07:19.324099] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.756 passed 00:14:17.756 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 15:07:19.398775] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.756 [2024-12-09 15:07:19.494224] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:17.756 [2024-12-09 15:07:19.502228] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:17.756 [2024-12-09 15:07:19.510226] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:17.756 [2024-12-09 15:07:19.518225] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:17.756 [2024-12-09 15:07:19.547307] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.015 passed 00:14:18.015 Test: admin_create_io_sq_verify_pc ...[2024-12-09 15:07:19.620848] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.015 [2024-12-09 15:07:19.640234] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:18.015 [2024-12-09 15:07:19.658054] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.015 passed 00:14:18.015 Test: admin_create_io_qp_max_qps ...[2024-12-09 15:07:19.733571] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.390 [2024-12-09 15:07:20.850228] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:19.649 [2024-12-09 15:07:21.233887] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.649 passed 00:14:19.649 Test: admin_create_io_sq_shared_cq ...[2024-12-09 15:07:21.310857] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.649 [2024-12-09 15:07:21.443233] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:19.908 [2024-12-09 15:07:21.480295] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.908 passed 00:14:19.908 00:14:19.908 Run Summary: Type Total Ran Passed Failed Inactive 00:14:19.908 suites 1 1 n/a 0 0 00:14:19.908 tests 18 18 18 0 0 00:14:19.908 asserts 
360 360 360 0 n/a 00:14:19.908 00:14:19.908 Elapsed time = 1.513 seconds 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1399250 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1399250 ']' 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1399250 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399250 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399250' 00:14:19.908 killing process with pid 1399250 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1399250 00:14:19.908 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1399250 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:20.167 00:14:20.167 real 0m5.673s 00:14:20.167 user 0m15.914s 00:14:20.167 sys 0m0.462s 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:20.167 ************************************ 00:14:20.167 END TEST nvmf_vfio_user_nvme_compliance 00:14:20.167 ************************************ 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.167 ************************************ 00:14:20.167 START TEST nvmf_vfio_user_fuzz 00:14:20.167 ************************************ 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:20.167 * Looking for test storage... 
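For reference, the 18-test CUnit run summarized above (18 ran, 18 passed, 360 asserts, roughly 1.5 seconds) reduces to pointing the compliance binary at the vfio-user endpoint the harness created; the fuzz test whose storage lookup starts here uses the same connection-string convention. A sketch with ROOT as a placeholder for the spdk checkout:

  ROOT=/path/to/spdk                      # placeholder
  # same flags as the @40 step above; expect the CUnit summary shown in the log
  "$ROOT/test/nvme/compliance/nvme_compliance" -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'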
00:14:20.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:20.167 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.427 --rc genhtml_branch_coverage=1 00:14:20.427 --rc genhtml_function_coverage=1 00:14:20.427 --rc genhtml_legend=1 00:14:20.427 --rc geninfo_all_blocks=1 00:14:20.427 --rc geninfo_unexecuted_blocks=1 00:14:20.427 00:14:20.427 ' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.427 --rc genhtml_branch_coverage=1 00:14:20.427 --rc genhtml_function_coverage=1 00:14:20.427 --rc genhtml_legend=1 00:14:20.427 --rc geninfo_all_blocks=1 00:14:20.427 --rc geninfo_unexecuted_blocks=1 00:14:20.427 00:14:20.427 ' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.427 --rc genhtml_branch_coverage=1 00:14:20.427 --rc genhtml_function_coverage=1 00:14:20.427 --rc genhtml_legend=1 00:14:20.427 --rc geninfo_all_blocks=1 00:14:20.427 --rc geninfo_unexecuted_blocks=1 00:14:20.427 00:14:20.427 ' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:20.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.427 --rc genhtml_branch_coverage=1 00:14:20.427 --rc genhtml_function_coverage=1 00:14:20.427 --rc genhtml_legend=1 00:14:20.427 --rc geninfo_all_blocks=1 00:14:20.427 --rc geninfo_unexecuted_blocks=1 00:14:20.427 00:14:20.427 ' 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.427 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.427 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:20.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1400243 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1400243' 00:14:20.428 Process pid: 1400243 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1400243 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1400243 ']' 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
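The fuzz script starts the NVMe-oF target in the background, installs a cleanup trap, and then sits in waitforlisten until the RPC socket on /var/tmp/spdk.sock answers. A rough standalone equivalent of that start-and-wait step, assuming the SPDK tree at the workspace path shown above and using rpc.py only as a readiness probe (the polling loop is a simplification of the harness's waitforlisten):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch the target on core 0 with all trace flags, as in the trace above.
    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    # Poll the default RPC socket until the app is ready (simplified waitforlisten).
    for _ in $(seq 1 100); do
        "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done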
00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.428 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:20.687 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.687 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:20.687 15:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 malloc0 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
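The rpc_cmd calls traced above assemble the fuzz target in four steps: create the VFIOUSER transport, back it with a 64 MiB malloc bdev, expose that bdev as a namespace of nqn.2021-09.io.spdk:cnode0, and add a vfio-user listener rooted at /var/run/vfio-user. Issued directly through rpc.py (instead of the harness's rpc_cmd wrapper) the same sequence would look roughly like this, with every argument value copied from the trace:

    RPC="$SPDK/scripts/rpc.py"       # assumes $SPDK as in the sketch above

    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $RPC bdev_malloc_create 64 512 -b malloc0        # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The transport ID built at the end of the trace, 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user', is what the nvme_fuzz run that follows is pointed at via -F.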
00:14:21.622 15:07:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:53.694 Fuzzing completed. Shutting down the fuzz application 00:14:53.694 00:14:53.694 Dumping successful admin opcodes: 00:14:53.694 9, 10, 00:14:53.694 Dumping successful io opcodes: 00:14:53.694 0, 00:14:53.694 NS: 0x20000081ef00 I/O qp, Total commands completed: 998688, total successful commands: 3910, random_seed: 2610162240 00:14:53.694 NS: 0x20000081ef00 admin qp, Total commands completed: 243408, total successful commands: 57, random_seed: 2546507776 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1400243 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1400243 ']' 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1400243 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400243 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1400243' 00:14:53.694 killing process with pid 1400243 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1400243 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1400243 00:14:53.694 15:07:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:53.694 00:14:53.694 real 0m32.213s 00:14:53.694 user 0m30.309s 00:14:53.694 sys 0m30.313s 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 ************************************ 
00:14:53.694 END TEST nvmf_vfio_user_fuzz 00:14:53.694 ************************************ 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.694 ************************************ 00:14:53.694 START TEST nvmf_auth_target 00:14:53.694 ************************************ 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:53.694 * Looking for test storage... 00:14:53.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:53.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.694 --rc genhtml_branch_coverage=1 00:14:53.694 --rc genhtml_function_coverage=1 00:14:53.694 --rc genhtml_legend=1 00:14:53.694 --rc geninfo_all_blocks=1 00:14:53.694 --rc geninfo_unexecuted_blocks=1 00:14:53.694 00:14:53.694 ' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:53.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.694 --rc genhtml_branch_coverage=1 00:14:53.694 --rc genhtml_function_coverage=1 00:14:53.694 --rc genhtml_legend=1 00:14:53.694 --rc geninfo_all_blocks=1 00:14:53.694 --rc geninfo_unexecuted_blocks=1 00:14:53.694 00:14:53.694 ' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:53.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.694 --rc genhtml_branch_coverage=1 00:14:53.694 --rc genhtml_function_coverage=1 00:14:53.694 --rc genhtml_legend=1 00:14:53.694 --rc geninfo_all_blocks=1 00:14:53.694 --rc geninfo_unexecuted_blocks=1 00:14:53.694 00:14:53.694 ' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:53.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.694 --rc genhtml_branch_coverage=1 00:14:53.694 --rc genhtml_function_coverage=1 00:14:53.694 --rc genhtml_legend=1 00:14:53.694 --rc geninfo_all_blocks=1 00:14:53.694 --rc geninfo_unexecuted_blocks=1 00:14:53.694 00:14:53.694 ' 00:14:53.694 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.695 15:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:53.695 15:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:58.971 
15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:58.971 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.971 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:58.971 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.971 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:58.971 Found net devices under 0000:af:00.0: cvl_0_0 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:58.971 Found net devices under 0000:af:00.1: cvl_0_1 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.971 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.972 15:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:14:58.972 00:14:58.972 --- 10.0.0.2 ping statistics --- 00:14:58.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.972 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:14:58.972 00:14:58.972 --- 10.0.0.1 ping statistics --- 00:14:58.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.972 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1408457 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1408457 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1408457 ']' 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
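nvmftestinit above carves the test network out of the two E810 ports: the first port (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2 for the target, the second (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator, TCP port 4420 is opened in iptables, and one ping in each direction confirms the path before the auth target is started inside the namespace. Condensed from the ip/iptables commands in the trace (the cvl_* names are what this rig reports; other machines will differ):

    TGT_IF=cvl_0_0            # moves into the namespace, target side
    INI_IF=cvl_0_1            # stays in the default namespace, initiator side
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator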
00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1408626 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=00f6665e06c48f3f376a560b3067451b62f08e8b21d57976 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0Kv 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 00f6665e06c48f3f376a560b3067451b62f08e8b21d57976 0 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 00f6665e06c48f3f376a560b3067451b62f08e8b21d57976 0 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=00f6665e06c48f3f376a560b3067451b62f08e8b21d57976 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
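Every secret used by the auth test is produced by gen_dhchap_key, as traced here for the first, 48-character "null"-digest key: draw len/2 random bytes from /dev/urandom as a hex string with xxd, wrap that string into a DHHC-1 blob (the format_dhchap_key step, handled by a short inline python helper plus the digest-to-hash-id map null/sha256/sha384/sha512 -> 0/1/2/3), then store the result 0600 in a mktemp file named after the digest. A trimmed sketch of the shell side, leaving the DHHC-1 wrapping to the harness helper rather than reimplementing its encoding (gen_key is an illustrative name; the real functions live in test/nvmf/common.sh):

    # gen_key <digest> <len>: digest is null/sha256/sha384/sha512, len is the
    # number of hex characters to draw from /dev/urandom.
    gen_key() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # format_dhchap_key "$key" <hash-id> > "$file"   # DHHC-1 wrapping, harness helper
        chmod 0600 "$file"
        echo "$file"
    }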
00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0Kv 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0Kv 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.0Kv 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5d3b97144b9757aaefc9b6172820ad3fd448237fdcb83871067b2c850a0333c1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DaT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5d3b97144b9757aaefc9b6172820ad3fd448237fdcb83871067b2c850a0333c1 3 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5d3b97144b9757aaefc9b6172820ad3fd448237fdcb83871067b2c850a0333c1 3 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5d3b97144b9757aaefc9b6172820ad3fd448237fdcb83871067b2c850a0333c1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DaT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DaT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.DaT 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
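Note how the arrays stay index-aligned: keys[0] is the 48-character null-digest secret generated above and ckeys[0] is a longer sha512 companion, with the c prefix presumably marking the controller-side key used for the bidirectional-authentication cases later in auth.sh. In terms of the illustrative gen_key helper sketched above, the first pair amounts to:

    keys[0]=$(gen_key null 48)       # /tmp/spdk.key-null.XXX in the trace
    ckeys[0]=$(gen_key sha512 64)    # paired controller-side secret, same index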
00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=12c2265f8e69163a4d9d1e71378b3a0a 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.F9M 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 12c2265f8e69163a4d9d1e71378b3a0a 1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 12c2265f8e69163a4d9d1e71378b3a0a 1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=12c2265f8e69163a4d9d1e71378b3a0a 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.F9M 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.F9M 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.F9M 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:58.972 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f5ac7753476d3c12836daead16f0296d5964f2520cdd68b1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fOi 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f5ac7753476d3c12836daead16f0296d5964f2520cdd68b1 2 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f5ac7753476d3c12836daead16f0296d5964f2520cdd68b1 2 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:59.231 15:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f5ac7753476d3c12836daead16f0296d5964f2520cdd68b1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fOi 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fOi 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.fOi 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=545815023edb8bb112f23802763e313cd6459d6ae9fdf931 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2dv 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 545815023edb8bb112f23802763e313cd6459d6ae9fdf931 2 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 545815023edb8bb112f23802763e313cd6459d6ae9fdf931 2 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=545815023edb8bb112f23802763e313cd6459d6ae9fdf931 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2dv 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2dv 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.2dv 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
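Each of the /tmp/spdk.key-* files generated here ends up holding a single DHHC-1:<id>:<base64>: line. Below is a quick, hypothetical way to sanity-check one of them, under the same assumption as the sketch above (base64 body = ASCII secret followed by its little-endian CRC32); check_dhchap_secret and its use of python3 are illustrative and not part of the test scripts.

check_dhchap_secret() {
    local s=$1 body
    body=${s#DHHC-1:*:}                    # drop the "DHHC-1:<id>:" prefix
    body=${body%:}                         # drop the trailing colon
    printf '%s' "$body" | base64 -d | python3 -c '
import sys, zlib
raw = sys.stdin.buffer.read()
secret, crc = raw[:-4], raw[-4:]
print("secret:", secret.decode(), "crc_ok:", zlib.crc32(secret).to_bytes(4, "little") == crc)
'
}
# e.g.: check_dhchap_secret "$(cat /tmp/spdk.key-sha384.fOi)"   # path taken from the trace above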
00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2268d9e4b3cd053482cd8ab81521eeb8 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.n0n 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2268d9e4b3cd053482cd8ab81521eeb8 1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2268d9e4b3cd053482cd8ab81521eeb8 1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2268d9e4b3cd053482cd8ab81521eeb8 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.n0n 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.n0n 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.n0n 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2416f81a3ef894ea0d719b42d67faa7dee10ee12273e55c87184cc5c2cf23960 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Gaw 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 2416f81a3ef894ea0d719b42d67faa7dee10ee12273e55c87184cc5c2cf23960 3 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2416f81a3ef894ea0d719b42d67faa7dee10ee12273e55c87184cc5c2cf23960 3 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2416f81a3ef894ea0d719b42d67faa7dee10ee12273e55c87184cc5c2cf23960 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:59.231 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Gaw 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Gaw 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Gaw 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1408457 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1408457 ']' 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.231 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.232 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1408626 /var/tmp/host.sock 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1408626 ']' 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:59.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
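With both daemons up (the two waitforlisten calls above: /var/tmp/spdk.sock for the target and /var/tmp/host.sock for the host-side application), the trace below walks keys[]/ckeys[] and registers every file with both keyrings via keyring_file_add_key. A condensed sketch of that loop follows, assuming keys[] and ckeys[] still hold the file paths generated above; the rpc.py path is the one used in the trace, plain rpc.py talks to the default /var/tmp/spdk.sock (standing in for the rpc_cmd helper), and -s /var/tmp/host.sock mirrors the hostrpc helper.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                         # target keyring
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"   # host keyring
    if [[ -n ${ckeys[$i]} ]]; then                                            # ckey3 is empty, so it is skipped
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done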
00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.489 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Kv 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.0Kv 00:14:59.749 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.0Kv 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.DaT ]] 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DaT 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DaT 00:15:00.051 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DaT 00:15:00.351 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:00.351 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.F9M 00:15:00.351 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.351 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.351 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.351 15:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.F9M 00:15:00.351 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.F9M 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.fOi ]] 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fOi 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fOi 00:15:00.351 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fOi 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2dv 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2dv 00:15:00.609 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2dv 00:15:00.867 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.n0n ]] 00:15:00.867 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0n 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0n 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0n 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:00.868 15:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Gaw 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Gaw 00:15:00.868 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Gaw 00:15:01.126 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:01.126 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:01.126 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.126 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.126 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.126 15:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.385 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.385 
15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.643 00:15:01.643 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.643 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.643 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.902 { 00:15:01.902 "cntlid": 1, 00:15:01.902 "qid": 0, 00:15:01.902 "state": "enabled", 00:15:01.902 "thread": "nvmf_tgt_poll_group_000", 00:15:01.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:01.902 "listen_address": { 00:15:01.902 "trtype": "TCP", 00:15:01.902 "adrfam": "IPv4", 00:15:01.902 "traddr": "10.0.0.2", 00:15:01.902 "trsvcid": "4420" 00:15:01.902 }, 00:15:01.902 "peer_address": { 00:15:01.902 "trtype": "TCP", 00:15:01.902 "adrfam": "IPv4", 00:15:01.902 "traddr": "10.0.0.1", 00:15:01.902 "trsvcid": "52414" 00:15:01.902 }, 00:15:01.902 "auth": { 00:15:01.902 "state": "completed", 00:15:01.902 "digest": "sha256", 00:15:01.902 "dhgroup": "null" 00:15:01.902 } 00:15:01.902 } 00:15:01.902 ]' 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.902 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.160 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:02.161 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.729 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.988 15:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.988 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.247 00:15:03.247 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.247 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.247 15:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.505 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.505 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.505 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.506 { 00:15:03.506 "cntlid": 3, 00:15:03.506 "qid": 0, 00:15:03.506 "state": "enabled", 00:15:03.506 "thread": "nvmf_tgt_poll_group_000", 00:15:03.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:03.506 "listen_address": { 00:15:03.506 "trtype": "TCP", 00:15:03.506 "adrfam": "IPv4", 00:15:03.506 "traddr": "10.0.0.2", 00:15:03.506 "trsvcid": "4420" 00:15:03.506 }, 00:15:03.506 "peer_address": { 00:15:03.506 "trtype": "TCP", 00:15:03.506 "adrfam": "IPv4", 00:15:03.506 "traddr": "10.0.0.1", 00:15:03.506 "trsvcid": "46626" 00:15:03.506 }, 00:15:03.506 "auth": { 00:15:03.506 "state": "completed", 00:15:03.506 "digest": "sha256", 00:15:03.506 "dhgroup": "null" 00:15:03.506 } 00:15:03.506 } 00:15:03.506 ]' 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.506 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.764 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:03.764 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.331 15:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.589 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.590 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.590 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.590 15:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.590 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.590 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.848 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.848 { 00:15:04.848 "cntlid": 5, 00:15:04.848 "qid": 0, 00:15:04.848 "state": "enabled", 00:15:04.848 "thread": "nvmf_tgt_poll_group_000", 00:15:04.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:04.848 "listen_address": { 00:15:04.848 "trtype": "TCP", 00:15:04.848 "adrfam": "IPv4", 00:15:04.848 "traddr": "10.0.0.2", 00:15:04.848 "trsvcid": "4420" 00:15:04.848 }, 00:15:04.848 "peer_address": { 00:15:04.848 "trtype": "TCP", 00:15:04.848 "adrfam": "IPv4", 00:15:04.848 "traddr": "10.0.0.1", 00:15:04.848 "trsvcid": "46658" 00:15:04.848 }, 00:15:04.848 "auth": { 00:15:04.848 "state": "completed", 00:15:04.848 "digest": "sha256", 00:15:04.848 "dhgroup": "null" 00:15:04.848 } 00:15:04.848 } 00:15:04.848 ]' 00:15:04.848 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.106 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.106 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.106 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.106 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.106 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.106 15:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.106 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.364 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:05.364 15:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.930 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.188 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.188 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.188 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.188 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.188 00:15:06.446 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.446 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.446 15:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.446 { 00:15:06.446 "cntlid": 7, 00:15:06.446 "qid": 0, 00:15:06.446 "state": "enabled", 00:15:06.446 "thread": "nvmf_tgt_poll_group_000", 00:15:06.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:06.446 "listen_address": { 00:15:06.446 "trtype": "TCP", 00:15:06.446 "adrfam": "IPv4", 00:15:06.446 "traddr": "10.0.0.2", 00:15:06.446 "trsvcid": "4420" 00:15:06.446 }, 00:15:06.446 "peer_address": { 00:15:06.446 "trtype": "TCP", 00:15:06.446 "adrfam": "IPv4", 00:15:06.446 "traddr": "10.0.0.1", 00:15:06.446 "trsvcid": "46684" 00:15:06.446 }, 00:15:06.446 "auth": { 00:15:06.446 "state": "completed", 00:15:06.446 "digest": "sha256", 00:15:06.446 "dhgroup": "null" 00:15:06.446 } 00:15:06.446 } 00:15:06.446 ]' 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.446 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.705 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.705 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.705 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.705 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.705 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.963 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:06.963 15:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.531 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.789 00:15:07.789 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.789 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.789 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.047 { 00:15:08.047 "cntlid": 9, 00:15:08.047 "qid": 0, 00:15:08.047 "state": "enabled", 00:15:08.047 "thread": "nvmf_tgt_poll_group_000", 00:15:08.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:08.047 "listen_address": { 00:15:08.047 "trtype": "TCP", 00:15:08.047 "adrfam": "IPv4", 00:15:08.047 "traddr": "10.0.0.2", 00:15:08.047 "trsvcid": "4420" 00:15:08.047 }, 00:15:08.047 "peer_address": { 00:15:08.047 "trtype": "TCP", 00:15:08.047 "adrfam": "IPv4", 00:15:08.047 "traddr": "10.0.0.1", 00:15:08.047 "trsvcid": "46704" 00:15:08.047 }, 00:15:08.047 "auth": { 00:15:08.047 "state": "completed", 00:15:08.047 "digest": "sha256", 00:15:08.047 "dhgroup": "ffdhe2048" 00:15:08.047 } 00:15:08.047 } 00:15:08.047 ]' 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.047 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.306 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:08.306 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.306 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.306 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.306 15:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.306 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:08.306 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.241 15:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.241 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.500 00:15:09.500 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.500 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.500 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.758 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.758 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.759 { 00:15:09.759 "cntlid": 11, 00:15:09.759 "qid": 0, 00:15:09.759 "state": "enabled", 00:15:09.759 "thread": "nvmf_tgt_poll_group_000", 00:15:09.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:09.759 "listen_address": { 00:15:09.759 "trtype": "TCP", 00:15:09.759 "adrfam": "IPv4", 00:15:09.759 "traddr": "10.0.0.2", 00:15:09.759 "trsvcid": "4420" 00:15:09.759 }, 00:15:09.759 "peer_address": { 00:15:09.759 "trtype": "TCP", 00:15:09.759 "adrfam": "IPv4", 00:15:09.759 "traddr": "10.0.0.1", 00:15:09.759 "trsvcid": "46720" 00:15:09.759 }, 00:15:09.759 "auth": { 00:15:09.759 "state": "completed", 00:15:09.759 "digest": "sha256", 00:15:09.759 "dhgroup": "ffdhe2048" 00:15:09.759 } 00:15:09.759 } 00:15:09.759 ]' 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.759 15:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.759 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.017 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:10.017 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.584 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.843 15:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.843 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.101 00:15:11.101 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.101 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.101 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.359 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.359 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.359 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.359 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.359 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.359 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.359 { 00:15:11.359 "cntlid": 13, 00:15:11.359 "qid": 0, 00:15:11.359 "state": "enabled", 00:15:11.359 "thread": "nvmf_tgt_poll_group_000", 00:15:11.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:11.359 "listen_address": { 00:15:11.359 "trtype": "TCP", 00:15:11.359 "adrfam": "IPv4", 00:15:11.359 "traddr": "10.0.0.2", 00:15:11.359 "trsvcid": "4420" 00:15:11.359 }, 00:15:11.359 "peer_address": { 00:15:11.359 "trtype": "TCP", 00:15:11.359 "adrfam": "IPv4", 00:15:11.360 "traddr": "10.0.0.1", 00:15:11.360 "trsvcid": "46746" 00:15:11.360 }, 00:15:11.360 "auth": { 00:15:11.360 "state": "completed", 00:15:11.360 "digest": 
"sha256", 00:15:11.360 "dhgroup": "ffdhe2048" 00:15:11.360 } 00:15:11.360 } 00:15:11.360 ]' 00:15:11.360 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.360 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.618 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:11.619 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.185 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.443 15:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.443 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.702 00:15:12.702 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.702 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.702 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.960 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.961 { 00:15:12.961 "cntlid": 15, 00:15:12.961 "qid": 0, 00:15:12.961 "state": "enabled", 00:15:12.961 "thread": "nvmf_tgt_poll_group_000", 00:15:12.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:12.961 "listen_address": { 00:15:12.961 "trtype": "TCP", 00:15:12.961 "adrfam": "IPv4", 00:15:12.961 "traddr": "10.0.0.2", 00:15:12.961 "trsvcid": "4420" 00:15:12.961 }, 00:15:12.961 "peer_address": { 00:15:12.961 "trtype": "TCP", 00:15:12.961 "adrfam": "IPv4", 00:15:12.961 "traddr": "10.0.0.1", 00:15:12.961 
"trsvcid": "58072" 00:15:12.961 }, 00:15:12.961 "auth": { 00:15:12.961 "state": "completed", 00:15:12.961 "digest": "sha256", 00:15:12.961 "dhgroup": "ffdhe2048" 00:15:12.961 } 00:15:12.961 } 00:15:12.961 ]' 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.961 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.219 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:13.219 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:13.793 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.793 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:13.793 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.794 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.794 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.794 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.794 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.794 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.794 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:14.052 15:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.052 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.311 00:15:14.311 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.311 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.311 15:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.570 { 00:15:14.570 "cntlid": 17, 00:15:14.570 "qid": 0, 00:15:14.570 "state": "enabled", 00:15:14.570 "thread": "nvmf_tgt_poll_group_000", 00:15:14.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:14.570 "listen_address": { 00:15:14.570 "trtype": "TCP", 00:15:14.570 "adrfam": "IPv4", 
00:15:14.570 "traddr": "10.0.0.2", 00:15:14.570 "trsvcid": "4420" 00:15:14.570 }, 00:15:14.570 "peer_address": { 00:15:14.570 "trtype": "TCP", 00:15:14.570 "adrfam": "IPv4", 00:15:14.570 "traddr": "10.0.0.1", 00:15:14.570 "trsvcid": "58098" 00:15:14.570 }, 00:15:14.570 "auth": { 00:15:14.570 "state": "completed", 00:15:14.570 "digest": "sha256", 00:15:14.570 "dhgroup": "ffdhe3072" 00:15:14.570 } 00:15:14.570 } 00:15:14.570 ]' 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.570 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.829 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:14.829 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.396 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:15.655 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:15.655 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.655 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.655 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.655 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.656 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.914 00:15:15.914 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.914 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.914 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.174 { 
00:15:16.174 "cntlid": 19, 00:15:16.174 "qid": 0, 00:15:16.174 "state": "enabled", 00:15:16.174 "thread": "nvmf_tgt_poll_group_000", 00:15:16.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:16.174 "listen_address": { 00:15:16.174 "trtype": "TCP", 00:15:16.174 "adrfam": "IPv4", 00:15:16.174 "traddr": "10.0.0.2", 00:15:16.174 "trsvcid": "4420" 00:15:16.174 }, 00:15:16.174 "peer_address": { 00:15:16.174 "trtype": "TCP", 00:15:16.174 "adrfam": "IPv4", 00:15:16.174 "traddr": "10.0.0.1", 00:15:16.174 "trsvcid": "58118" 00:15:16.174 }, 00:15:16.174 "auth": { 00:15:16.174 "state": "completed", 00:15:16.174 "digest": "sha256", 00:15:16.174 "dhgroup": "ffdhe3072" 00:15:16.174 } 00:15:16.174 } 00:15:16.174 ]' 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.174 15:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.433 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:16.433 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:17.001 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.001 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:17.001 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.001 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.001 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.001 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.002 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.002 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.260 15:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.519 00:15:17.519 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.519 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.519 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.778 15:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.778 { 00:15:17.778 "cntlid": 21, 00:15:17.778 "qid": 0, 00:15:17.778 "state": "enabled", 00:15:17.778 "thread": "nvmf_tgt_poll_group_000", 00:15:17.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:17.778 "listen_address": { 00:15:17.778 "trtype": "TCP", 00:15:17.778 "adrfam": "IPv4", 00:15:17.778 "traddr": "10.0.0.2", 00:15:17.778 "trsvcid": "4420" 00:15:17.778 }, 00:15:17.778 "peer_address": { 00:15:17.778 "trtype": "TCP", 00:15:17.778 "adrfam": "IPv4", 00:15:17.778 "traddr": "10.0.0.1", 00:15:17.778 "trsvcid": "58142" 00:15:17.778 }, 00:15:17.778 "auth": { 00:15:17.778 "state": "completed", 00:15:17.778 "digest": "sha256", 00:15:17.778 "dhgroup": "ffdhe3072" 00:15:17.778 } 00:15:17.778 } 00:15:17.778 ]' 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.778 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.037 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:18.037 15:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.606 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.866 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.126 00:15:19.126 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.126 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.126 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.390 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.390 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.390 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.390 15:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.390 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.390 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.390 { 00:15:19.390 "cntlid": 23, 00:15:19.390 "qid": 0, 00:15:19.390 "state": "enabled", 00:15:19.390 "thread": "nvmf_tgt_poll_group_000", 00:15:19.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:19.390 "listen_address": { 00:15:19.390 "trtype": "TCP", 00:15:19.390 "adrfam": "IPv4", 00:15:19.390 "traddr": "10.0.0.2", 00:15:19.390 "trsvcid": "4420" 00:15:19.390 }, 00:15:19.390 "peer_address": { 00:15:19.390 "trtype": "TCP", 00:15:19.390 "adrfam": "IPv4", 00:15:19.390 "traddr": "10.0.0.1", 00:15:19.390 "trsvcid": "58176" 00:15:19.390 }, 00:15:19.390 "auth": { 00:15:19.390 "state": "completed", 00:15:19.390 "digest": "sha256", 00:15:19.390 "dhgroup": "ffdhe3072" 00:15:19.390 } 00:15:19.390 } 00:15:19.390 ]' 00:15:19.390 15:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.390 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.649 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:19.649 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:20.216 15:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.475 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.734 00:15:20.734 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.734 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.734 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.993 { 00:15:20.993 "cntlid": 25, 00:15:20.993 "qid": 0, 00:15:20.993 "state": "enabled", 00:15:20.993 "thread": "nvmf_tgt_poll_group_000", 00:15:20.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:20.993 "listen_address": { 00:15:20.993 "trtype": "TCP", 00:15:20.993 "adrfam": "IPv4", 00:15:20.993 "traddr": "10.0.0.2", 00:15:20.993 "trsvcid": "4420" 00:15:20.993 }, 00:15:20.993 "peer_address": { 00:15:20.993 "trtype": "TCP", 00:15:20.993 "adrfam": "IPv4", 00:15:20.993 "traddr": "10.0.0.1", 00:15:20.993 "trsvcid": "58198" 00:15:20.993 }, 00:15:20.993 "auth": { 00:15:20.993 "state": "completed", 00:15:20.993 "digest": "sha256", 00:15:20.993 "dhgroup": "ffdhe4096" 00:15:20.993 } 00:15:20.993 } 00:15:20.993 ]' 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.993 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.252 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:21.252 15:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.821 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.081 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.340 00:15:22.340 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.340 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.340 15:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.599 { 00:15:22.599 "cntlid": 27, 00:15:22.599 "qid": 0, 00:15:22.599 "state": "enabled", 00:15:22.599 "thread": "nvmf_tgt_poll_group_000", 00:15:22.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:22.599 "listen_address": { 00:15:22.599 "trtype": "TCP", 00:15:22.599 "adrfam": "IPv4", 00:15:22.599 "traddr": "10.0.0.2", 00:15:22.599 "trsvcid": "4420" 00:15:22.599 }, 00:15:22.599 "peer_address": { 00:15:22.599 "trtype": "TCP", 00:15:22.599 "adrfam": "IPv4", 00:15:22.599 "traddr": "10.0.0.1", 00:15:22.599 "trsvcid": "58212" 00:15:22.599 }, 00:15:22.599 "auth": { 00:15:22.599 "state": "completed", 00:15:22.599 "digest": "sha256", 00:15:22.599 "dhgroup": "ffdhe4096" 00:15:22.599 } 00:15:22.599 } 00:15:22.599 ]' 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.599 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.858 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:22.858 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:23.428 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.428 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.428 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:23.428 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.428 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.428 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.428 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.429 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.429 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.691 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.949 00:15:23.949 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.949 15:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.950 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.208 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.208 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.208 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.208 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.208 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.208 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.208 { 00:15:24.208 "cntlid": 29, 00:15:24.208 "qid": 0, 00:15:24.208 "state": "enabled", 00:15:24.209 "thread": "nvmf_tgt_poll_group_000", 00:15:24.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:24.209 "listen_address": { 00:15:24.209 "trtype": "TCP", 00:15:24.209 "adrfam": "IPv4", 00:15:24.209 "traddr": "10.0.0.2", 00:15:24.209 "trsvcid": "4420" 00:15:24.209 }, 00:15:24.209 "peer_address": { 00:15:24.209 "trtype": "TCP", 00:15:24.209 "adrfam": "IPv4", 00:15:24.209 "traddr": "10.0.0.1", 00:15:24.209 "trsvcid": "55300" 00:15:24.209 }, 00:15:24.209 "auth": { 00:15:24.209 "state": "completed", 00:15:24.209 "digest": "sha256", 00:15:24.209 "dhgroup": "ffdhe4096" 00:15:24.209 } 00:15:24.209 } 00:15:24.209 ]' 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.209 15:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.468 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:24.468 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret 
DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.035 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.294 15:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.553 00:15:25.553 15:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.553 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.553 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.812 { 00:15:25.812 "cntlid": 31, 00:15:25.812 "qid": 0, 00:15:25.812 "state": "enabled", 00:15:25.812 "thread": "nvmf_tgt_poll_group_000", 00:15:25.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:25.812 "listen_address": { 00:15:25.812 "trtype": "TCP", 00:15:25.812 "adrfam": "IPv4", 00:15:25.812 "traddr": "10.0.0.2", 00:15:25.812 "trsvcid": "4420" 00:15:25.812 }, 00:15:25.812 "peer_address": { 00:15:25.812 "trtype": "TCP", 00:15:25.812 "adrfam": "IPv4", 00:15:25.812 "traddr": "10.0.0.1", 00:15:25.812 "trsvcid": "55332" 00:15:25.812 }, 00:15:25.812 "auth": { 00:15:25.812 "state": "completed", 00:15:25.812 "digest": "sha256", 00:15:25.812 "dhgroup": "ffdhe4096" 00:15:25.812 } 00:15:25.812 } 00:15:25.812 ]' 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.812 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.813 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.813 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.072 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.072 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.072 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.072 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:26.072 15:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.641 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.901 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.160 00:15:27.419 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.419 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.419 15:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.419 { 00:15:27.419 "cntlid": 33, 00:15:27.419 "qid": 0, 00:15:27.419 "state": "enabled", 00:15:27.419 "thread": "nvmf_tgt_poll_group_000", 00:15:27.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:27.419 "listen_address": { 00:15:27.419 "trtype": "TCP", 00:15:27.419 "adrfam": "IPv4", 00:15:27.419 "traddr": "10.0.0.2", 00:15:27.419 "trsvcid": "4420" 00:15:27.419 }, 00:15:27.419 "peer_address": { 00:15:27.419 "trtype": "TCP", 00:15:27.419 "adrfam": "IPv4", 00:15:27.419 "traddr": "10.0.0.1", 00:15:27.419 "trsvcid": "55352" 00:15:27.419 }, 00:15:27.419 "auth": { 00:15:27.419 "state": "completed", 00:15:27.419 "digest": "sha256", 00:15:27.419 "dhgroup": "ffdhe6144" 00:15:27.419 } 00:15:27.419 } 00:15:27.419 ]' 00:15:27.419 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.678 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.937 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret 
DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:27.937 15:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.505 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.764 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.764 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.764 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.764 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.023 00:15:29.023 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.023 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.023 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.283 { 00:15:29.283 "cntlid": 35, 00:15:29.283 "qid": 0, 00:15:29.283 "state": "enabled", 00:15:29.283 "thread": "nvmf_tgt_poll_group_000", 00:15:29.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:29.283 "listen_address": { 00:15:29.283 "trtype": "TCP", 00:15:29.283 "adrfam": "IPv4", 00:15:29.283 "traddr": "10.0.0.2", 00:15:29.283 "trsvcid": "4420" 00:15:29.283 }, 00:15:29.283 "peer_address": { 00:15:29.283 "trtype": "TCP", 00:15:29.283 "adrfam": "IPv4", 00:15:29.283 "traddr": "10.0.0.1", 00:15:29.283 "trsvcid": "55376" 00:15:29.283 }, 00:15:29.283 "auth": { 00:15:29.283 "state": "completed", 00:15:29.283 "digest": "sha256", 00:15:29.283 "dhgroup": "ffdhe6144" 00:15:29.283 } 00:15:29.283 } 00:15:29.283 ]' 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.283 15:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.283 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.283 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.283 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.542 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:29.542 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.109 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.368 15:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.368 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.368 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.368 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.368 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.626 00:15:30.626 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.626 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.626 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.884 { 00:15:30.884 "cntlid": 37, 00:15:30.884 "qid": 0, 00:15:30.884 "state": "enabled", 00:15:30.884 "thread": "nvmf_tgt_poll_group_000", 00:15:30.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:30.884 "listen_address": { 00:15:30.884 "trtype": "TCP", 00:15:30.884 "adrfam": "IPv4", 00:15:30.884 "traddr": "10.0.0.2", 00:15:30.884 "trsvcid": "4420" 00:15:30.884 }, 00:15:30.884 "peer_address": { 00:15:30.884 "trtype": "TCP", 00:15:30.884 "adrfam": "IPv4", 00:15:30.884 "traddr": "10.0.0.1", 00:15:30.884 "trsvcid": "55412" 00:15:30.884 }, 00:15:30.884 "auth": { 00:15:30.884 "state": "completed", 00:15:30.884 "digest": "sha256", 00:15:30.884 "dhgroup": "ffdhe6144" 00:15:30.884 } 00:15:30.884 } 00:15:30.884 ]' 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.884 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.143 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.143 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:31.143 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.143 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:31.143 15:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.710 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.968 15:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.968 15:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.536 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.536 { 00:15:32.536 "cntlid": 39, 00:15:32.536 "qid": 0, 00:15:32.536 "state": "enabled", 00:15:32.536 "thread": "nvmf_tgt_poll_group_000", 00:15:32.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:32.536 "listen_address": { 00:15:32.536 "trtype": "TCP", 00:15:32.536 "adrfam": "IPv4", 00:15:32.536 "traddr": "10.0.0.2", 00:15:32.536 "trsvcid": "4420" 00:15:32.536 }, 00:15:32.536 "peer_address": { 00:15:32.536 "trtype": "TCP", 00:15:32.536 "adrfam": "IPv4", 00:15:32.536 "traddr": "10.0.0.1", 00:15:32.536 "trsvcid": "55438" 00:15:32.536 }, 00:15:32.536 "auth": { 00:15:32.536 "state": "completed", 00:15:32.536 "digest": "sha256", 00:15:32.536 "dhgroup": "ffdhe6144" 00:15:32.536 } 00:15:32.536 } 00:15:32.536 ]' 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.536 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.794 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.794 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.795 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:32.795 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.795 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.053 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:33.053 15:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
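Every cycle in this part of the log is the same connect_authenticate pattern, varying only the digest, DH group, and key index; each cycle also exercises the nvme-cli initiator with nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ... followed by nvme disconnect -n nqn.2024-03.io.spdk:cnode0. The sketch below condenses the SPDK-host leg of the iteration in progress here (sha256 / ffdhe8192 / key0). It is a sketch rather than the script itself: the key names key0/ckey0 are assumed to have been registered with the host application earlier in the test (outside this excerpt), and the target-side calls are shown against the default RPC socket, which the rpc_cmd wrapper does not reveal in this log.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate iteration (target/auth.sh style).
# Assumptions not visible in this excerpt: keys key0/ckey0 are already registered
# on the host side, and the target-side rpc_cmd helper uses the default RPC socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the SPDK host application behind /var/tmp/host.sock to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: allow the host NQN on the subsystem with the matching DH-HMAC-CHAP keys.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; DH-HMAC-CHAP authentication runs during this connect.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Check the controller and the negotiated auth parameters, then tear down.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Restricting the host to a single digest/dhgroup per iteration is what makes the later [[ sha256 == ... ]] and [[ ffdhe8192 == ... ]] checks meaningful: the qpair can only report the one combination that was explicitly allowed.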
00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.619 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.186 00:15:34.186 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.186 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.186 15:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.444 { 00:15:34.444 "cntlid": 41, 00:15:34.444 "qid": 0, 00:15:34.444 "state": "enabled", 00:15:34.444 "thread": "nvmf_tgt_poll_group_000", 00:15:34.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:34.444 "listen_address": { 00:15:34.444 "trtype": "TCP", 00:15:34.444 "adrfam": "IPv4", 00:15:34.444 "traddr": "10.0.0.2", 00:15:34.444 "trsvcid": "4420" 00:15:34.444 }, 00:15:34.444 "peer_address": { 00:15:34.444 "trtype": "TCP", 00:15:34.444 "adrfam": "IPv4", 00:15:34.444 "traddr": "10.0.0.1", 00:15:34.444 "trsvcid": "42908" 00:15:34.444 }, 00:15:34.444 "auth": { 00:15:34.444 "state": "completed", 00:15:34.444 "digest": "sha256", 00:15:34.444 "dhgroup": "ffdhe8192" 00:15:34.444 } 00:15:34.444 } 00:15:34.444 ]' 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.444 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.445 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.445 15:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.445 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.445 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.445 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.703 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:34.703 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.270 15:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.528 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.095 00:15:36.095 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.095 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.095 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.095 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.354 { 00:15:36.354 "cntlid": 43, 00:15:36.354 "qid": 0, 00:15:36.354 "state": "enabled", 00:15:36.354 "thread": "nvmf_tgt_poll_group_000", 00:15:36.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:36.354 "listen_address": { 00:15:36.354 "trtype": "TCP", 00:15:36.354 "adrfam": "IPv4", 00:15:36.354 "traddr": "10.0.0.2", 00:15:36.354 "trsvcid": "4420" 00:15:36.354 }, 00:15:36.354 "peer_address": { 00:15:36.354 "trtype": "TCP", 00:15:36.354 "adrfam": "IPv4", 00:15:36.354 "traddr": "10.0.0.1", 00:15:36.354 "trsvcid": "42948" 00:15:36.354 }, 00:15:36.354 "auth": { 00:15:36.354 "state": "completed", 00:15:36.354 "digest": "sha256", 00:15:36.354 "dhgroup": "ffdhe8192" 00:15:36.354 } 00:15:36.354 } 00:15:36.354 ]' 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.354 15:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.354 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.354 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.354 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.613 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:36.613 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:37.180 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.439 15:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.439 15:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.697 00:15:37.697 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.697 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.697 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.956 { 00:15:37.956 "cntlid": 45, 00:15:37.956 "qid": 0, 00:15:37.956 "state": "enabled", 00:15:37.956 "thread": "nvmf_tgt_poll_group_000", 00:15:37.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:37.956 "listen_address": { 00:15:37.956 "trtype": "TCP", 00:15:37.956 "adrfam": "IPv4", 00:15:37.956 "traddr": "10.0.0.2", 00:15:37.956 "trsvcid": "4420" 00:15:37.956 }, 00:15:37.956 "peer_address": { 00:15:37.956 "trtype": "TCP", 00:15:37.956 "adrfam": "IPv4", 00:15:37.956 "traddr": "10.0.0.1", 00:15:37.956 "trsvcid": "42972" 00:15:37.956 }, 00:15:37.956 "auth": { 00:15:37.956 "state": "completed", 00:15:37.956 "digest": "sha256", 00:15:37.956 "dhgroup": "ffdhe8192" 00:15:37.956 } 00:15:37.956 } 00:15:37.956 ]' 00:15:37.956 
15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.956 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:38.214 15:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:38.779 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.779 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:38.779 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.779 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.037 15:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.037 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.038 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.038 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.038 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.038 15:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.605 00:15:39.605 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.605 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.605 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.863 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.863 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.863 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.863 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.863 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.863 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.863 { 00:15:39.863 "cntlid": 47, 00:15:39.863 "qid": 0, 00:15:39.863 "state": "enabled", 00:15:39.863 "thread": "nvmf_tgt_poll_group_000", 00:15:39.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:39.863 "listen_address": { 00:15:39.863 "trtype": "TCP", 00:15:39.863 "adrfam": "IPv4", 00:15:39.863 "traddr": "10.0.0.2", 00:15:39.863 "trsvcid": "4420" 00:15:39.863 }, 00:15:39.863 "peer_address": { 00:15:39.863 "trtype": "TCP", 00:15:39.863 "adrfam": "IPv4", 00:15:39.863 "traddr": "10.0.0.1", 00:15:39.863 "trsvcid": "43014" 00:15:39.863 }, 00:15:39.863 "auth": { 00:15:39.863 "state": "completed", 00:15:39.863 
"digest": "sha256", 00:15:39.863 "dhgroup": "ffdhe8192" 00:15:39.863 } 00:15:39.863 } 00:15:39.863 ]' 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.864 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.122 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:40.122 15:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.705 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:41.025 15:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.025 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.025 00:15:41.322 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.322 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.322 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.322 { 00:15:41.322 "cntlid": 49, 00:15:41.322 "qid": 0, 00:15:41.322 "state": "enabled", 00:15:41.322 "thread": "nvmf_tgt_poll_group_000", 00:15:41.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:41.322 "listen_address": { 00:15:41.322 "trtype": "TCP", 00:15:41.322 "adrfam": "IPv4", 
00:15:41.322 "traddr": "10.0.0.2", 00:15:41.322 "trsvcid": "4420" 00:15:41.322 }, 00:15:41.322 "peer_address": { 00:15:41.322 "trtype": "TCP", 00:15:41.322 "adrfam": "IPv4", 00:15:41.322 "traddr": "10.0.0.1", 00:15:41.322 "trsvcid": "43046" 00:15:41.322 }, 00:15:41.322 "auth": { 00:15:41.322 "state": "completed", 00:15:41.322 "digest": "sha384", 00:15:41.322 "dhgroup": "null" 00:15:41.322 } 00:15:41.322 } 00:15:41.322 ]' 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.322 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:41.581 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:42.148 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.405 15:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.405 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.406 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.406 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.663 00:15:42.663 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.663 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.663 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.922 { 00:15:42.922 "cntlid": 51, 00:15:42.922 "qid": 0, 00:15:42.922 "state": "enabled", 
00:15:42.922 "thread": "nvmf_tgt_poll_group_000", 00:15:42.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:42.922 "listen_address": { 00:15:42.922 "trtype": "TCP", 00:15:42.922 "adrfam": "IPv4", 00:15:42.922 "traddr": "10.0.0.2", 00:15:42.922 "trsvcid": "4420" 00:15:42.922 }, 00:15:42.922 "peer_address": { 00:15:42.922 "trtype": "TCP", 00:15:42.922 "adrfam": "IPv4", 00:15:42.922 "traddr": "10.0.0.1", 00:15:42.922 "trsvcid": "54266" 00:15:42.922 }, 00:15:42.922 "auth": { 00:15:42.922 "state": "completed", 00:15:42.922 "digest": "sha384", 00:15:42.922 "dhgroup": "null" 00:15:42.922 } 00:15:42.922 } 00:15:42.922 ]' 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.922 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:43.180 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:43.746 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.004 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.262 00:15:44.262 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.262 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.262 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.520 15:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.520 { 00:15:44.520 "cntlid": 53, 00:15:44.520 "qid": 0, 00:15:44.520 "state": "enabled", 00:15:44.520 "thread": "nvmf_tgt_poll_group_000", 00:15:44.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:44.520 "listen_address": { 00:15:44.520 "trtype": "TCP", 00:15:44.520 "adrfam": "IPv4", 00:15:44.520 "traddr": "10.0.0.2", 00:15:44.520 "trsvcid": "4420" 00:15:44.520 }, 00:15:44.520 "peer_address": { 00:15:44.520 "trtype": "TCP", 00:15:44.520 "adrfam": "IPv4", 00:15:44.520 "traddr": "10.0.0.1", 00:15:44.520 "trsvcid": "54298" 00:15:44.520 }, 00:15:44.520 "auth": { 00:15:44.520 "state": "completed", 00:15:44.520 "digest": "sha384", 00:15:44.520 "dhgroup": "null" 00:15:44.520 } 00:15:44.520 } 00:15:44.520 ]' 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.520 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.778 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.778 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.778 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.778 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:44.779 15:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.345 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.603 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:45.603 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.603 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.604 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.861 00:15:45.861 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.861 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.861 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.120 { 00:15:46.120 "cntlid": 55, 00:15:46.120 "qid": 0, 00:15:46.120 "state": "enabled", 00:15:46.120 "thread": "nvmf_tgt_poll_group_000", 00:15:46.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:46.120 "listen_address": { 00:15:46.120 "trtype": "TCP", 00:15:46.120 "adrfam": "IPv4", 00:15:46.120 "traddr": "10.0.0.2", 00:15:46.120 "trsvcid": "4420" 00:15:46.120 }, 00:15:46.120 "peer_address": { 00:15:46.120 "trtype": "TCP", 00:15:46.120 "adrfam": "IPv4", 00:15:46.120 "traddr": "10.0.0.1", 00:15:46.120 "trsvcid": "54336" 00:15:46.120 }, 00:15:46.120 "auth": { 00:15:46.120 "state": "completed", 00:15:46.120 "digest": "sha384", 00:15:46.120 "dhgroup": "null" 00:15:46.120 } 00:15:46.120 } 00:15:46.120 ]' 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.120 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.378 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.378 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.378 15:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.378 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:46.378 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.945 15:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.945 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.204 15:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.463 00:15:47.463 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.463 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.463 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.722 { 00:15:47.722 "cntlid": 57, 00:15:47.722 "qid": 0, 00:15:47.722 "state": "enabled", 00:15:47.722 "thread": "nvmf_tgt_poll_group_000", 00:15:47.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:47.722 "listen_address": { 00:15:47.722 "trtype": "TCP", 00:15:47.722 "adrfam": "IPv4", 00:15:47.722 "traddr": "10.0.0.2", 00:15:47.722 "trsvcid": "4420" 00:15:47.722 }, 00:15:47.722 "peer_address": { 00:15:47.722 "trtype": "TCP", 00:15:47.722 "adrfam": "IPv4", 00:15:47.722 "traddr": "10.0.0.1", 00:15:47.722 "trsvcid": "54380" 00:15:47.722 }, 00:15:47.722 "auth": { 00:15:47.722 "state": "completed", 00:15:47.722 "digest": "sha384", 00:15:47.722 "dhgroup": "ffdhe2048" 00:15:47.722 } 00:15:47.722 } 00:15:47.722 ]' 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.722 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.980 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.980 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.980 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.980 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:47.980 15:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.547 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.806 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.065 00:15:49.065 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.065 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.065 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.323 { 00:15:49.323 "cntlid": 59, 00:15:49.323 "qid": 0, 00:15:49.323 "state": "enabled", 00:15:49.323 "thread": "nvmf_tgt_poll_group_000", 00:15:49.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:49.323 "listen_address": { 00:15:49.323 "trtype": "TCP", 00:15:49.323 "adrfam": "IPv4", 00:15:49.323 "traddr": "10.0.0.2", 00:15:49.323 "trsvcid": "4420" 00:15:49.323 }, 00:15:49.323 "peer_address": { 00:15:49.323 "trtype": "TCP", 00:15:49.323 "adrfam": "IPv4", 00:15:49.323 "traddr": "10.0.0.1", 00:15:49.323 "trsvcid": "54402" 00:15:49.323 }, 00:15:49.323 "auth": { 00:15:49.323 "state": "completed", 00:15:49.323 "digest": "sha384", 00:15:49.323 "dhgroup": "ffdhe2048" 00:15:49.323 } 00:15:49.323 } 00:15:49.323 ]' 00:15:49.323 15:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.323 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.323 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.323 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.323 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.324 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.324 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.324 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.582 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:49.582 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.149 15:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.408 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.667 00:15:50.667 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.667 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.667 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.925 { 00:15:50.925 "cntlid": 61, 00:15:50.925 "qid": 0, 00:15:50.925 "state": "enabled", 00:15:50.925 "thread": "nvmf_tgt_poll_group_000", 00:15:50.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:50.925 "listen_address": { 00:15:50.925 "trtype": "TCP", 00:15:50.925 "adrfam": "IPv4", 00:15:50.925 "traddr": "10.0.0.2", 00:15:50.925 "trsvcid": "4420" 00:15:50.925 }, 00:15:50.925 "peer_address": { 00:15:50.925 "trtype": "TCP", 00:15:50.925 "adrfam": "IPv4", 00:15:50.925 "traddr": "10.0.0.1", 00:15:50.925 "trsvcid": "54446" 00:15:50.925 }, 00:15:50.925 "auth": { 00:15:50.925 "state": "completed", 00:15:50.925 "digest": "sha384", 00:15:50.925 "dhgroup": "ffdhe2048" 00:15:50.925 } 00:15:50.925 } 00:15:50.925 ]' 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.925 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.184 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:51.184 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.751 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.010 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.269 00:15:52.269 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.269 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.269 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.527 { 00:15:52.527 "cntlid": 63, 00:15:52.527 "qid": 0, 00:15:52.527 "state": "enabled", 00:15:52.527 "thread": "nvmf_tgt_poll_group_000", 00:15:52.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:52.527 "listen_address": { 00:15:52.527 "trtype": "TCP", 00:15:52.527 "adrfam": "IPv4", 00:15:52.527 "traddr": "10.0.0.2", 00:15:52.527 "trsvcid": "4420" 00:15:52.527 }, 00:15:52.527 "peer_address": { 00:15:52.527 "trtype": "TCP", 00:15:52.527 "adrfam": "IPv4", 00:15:52.527 "traddr": "10.0.0.1", 00:15:52.527 "trsvcid": "54478" 00:15:52.527 }, 00:15:52.527 "auth": { 00:15:52.527 "state": "completed", 00:15:52.527 "digest": "sha384", 00:15:52.527 "dhgroup": "ffdhe2048" 00:15:52.527 } 00:15:52.527 } 00:15:52.527 ]' 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.527 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.786 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:52.786 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:53.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.353 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.611 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.869 
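The trace above keeps repeating the same per-key cycle, so here is a minimal bash sketch of that cycle, assuming an SPDK target already listening on 10.0.0.2:4420 with the subsystem shown in the log, a host bdev_nvme application on /var/tmp/host.sock, and keys/ckeys arrays holding key names registered earlier in the test (key0..key3, ckey0..ckey2 in this run). The helpers and flags are the ones visible in the trace; this is not a verbatim copy of target/auth.sh.

```bash
#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP cycle recorded in the trace above.
# Assumption: keys[]/ckeys[] were populated and registered earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
digest=sha384
dhgroup=ffdhe2048

for keyid in "${!keys[@]}"; do
    # Pin the host initiator to one digest/dhgroup combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the target, with a controller key when one exists
    # for this index.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Attach a controller over TCP; DH-HMAC-CHAP runs during this connect.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # (the real test verifies the qpair auth state here, see the check sketched
    # further below, then tears the association down again)
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done
```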
00:15:53.869 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.869 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.869 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.128 { 00:15:54.128 "cntlid": 65, 00:15:54.128 "qid": 0, 00:15:54.128 "state": "enabled", 00:15:54.128 "thread": "nvmf_tgt_poll_group_000", 00:15:54.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:54.128 "listen_address": { 00:15:54.128 "trtype": "TCP", 00:15:54.128 "adrfam": "IPv4", 00:15:54.128 "traddr": "10.0.0.2", 00:15:54.128 "trsvcid": "4420" 00:15:54.128 }, 00:15:54.128 "peer_address": { 00:15:54.128 "trtype": "TCP", 00:15:54.128 "adrfam": "IPv4", 00:15:54.128 "traddr": "10.0.0.1", 00:15:54.128 "trsvcid": "35556" 00:15:54.128 }, 00:15:54.128 "auth": { 00:15:54.128 "state": "completed", 00:15:54.128 "digest": "sha384", 00:15:54.128 "dhgroup": "ffdhe3072" 00:15:54.128 } 00:15:54.128 } 00:15:54.128 ]' 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.128 15:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.387 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:54.387 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:54.954 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.212 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.470 00:15:55.471 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.471 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.471 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.729 { 00:15:55.729 "cntlid": 67, 00:15:55.729 "qid": 0, 00:15:55.729 "state": "enabled", 00:15:55.729 "thread": "nvmf_tgt_poll_group_000", 00:15:55.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:55.729 "listen_address": { 00:15:55.729 "trtype": "TCP", 00:15:55.729 "adrfam": "IPv4", 00:15:55.729 "traddr": "10.0.0.2", 00:15:55.729 "trsvcid": "4420" 00:15:55.729 }, 00:15:55.729 "peer_address": { 00:15:55.729 "trtype": "TCP", 00:15:55.729 "adrfam": "IPv4", 00:15:55.729 "traddr": "10.0.0.1", 00:15:55.729 "trsvcid": "35576" 00:15:55.729 }, 00:15:55.729 "auth": { 00:15:55.729 "state": "completed", 00:15:55.729 "digest": "sha384", 00:15:55.729 "dhgroup": "ffdhe3072" 00:15:55.729 } 00:15:55.729 } 00:15:55.729 ]' 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.729 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.987 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.988 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.988 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.988 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret 
DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:55.988 15:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.555 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.813 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.072 00:15:57.072 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.072 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.072 15:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.330 { 00:15:57.330 "cntlid": 69, 00:15:57.330 "qid": 0, 00:15:57.330 "state": "enabled", 00:15:57.330 "thread": "nvmf_tgt_poll_group_000", 00:15:57.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:57.330 "listen_address": { 00:15:57.330 "trtype": "TCP", 00:15:57.330 "adrfam": "IPv4", 00:15:57.330 "traddr": "10.0.0.2", 00:15:57.330 "trsvcid": "4420" 00:15:57.330 }, 00:15:57.330 "peer_address": { 00:15:57.330 "trtype": "TCP", 00:15:57.330 "adrfam": "IPv4", 00:15:57.330 "traddr": "10.0.0.1", 00:15:57.330 "trsvcid": "35604" 00:15:57.330 }, 00:15:57.330 "auth": { 00:15:57.330 "state": "completed", 00:15:57.330 "digest": "sha384", 00:15:57.330 "dhgroup": "ffdhe3072" 00:15:57.330 } 00:15:57.330 } 00:15:57.330 ]' 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.330 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.331 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.331 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.589 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.589 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.589 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:57.589 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:57.589 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.156 15:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
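Between the bdev attach/detach steps, the log also exercises the kernel initiator through nvme-cli. A minimal sketch of that leg, assuming the DHHC-1 secrets correspond to the keys configured on the target subsystem (the strings below are placeholders, not the secrets from this run):

```bash
#!/usr/bin/env bash
# Sketch of the nvme-cli connect/disconnect leg seen throughout this trace.
uuid=801347e8-3fd0-e911-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0

# Connect with DH-HMAC-CHAP secrets; the flags mirror the "nvme connect" lines
# in the log. <host secret>/<controller secret> are placeholders.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>:'

# The test only checks that the connect succeeded before tearing it down;
# the disconnect reports "disconnected 1 controller(s)" on success.
nvme disconnect -n "$subnqn"
```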
00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.414 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.672 00:15:58.672 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.672 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.672 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.931 { 00:15:58.931 "cntlid": 71, 00:15:58.931 "qid": 0, 00:15:58.931 "state": "enabled", 00:15:58.931 "thread": "nvmf_tgt_poll_group_000", 00:15:58.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:58.931 "listen_address": { 00:15:58.931 "trtype": "TCP", 00:15:58.931 "adrfam": "IPv4", 00:15:58.931 "traddr": "10.0.0.2", 00:15:58.931 "trsvcid": "4420" 00:15:58.931 }, 00:15:58.931 "peer_address": { 00:15:58.931 "trtype": "TCP", 00:15:58.931 "adrfam": "IPv4", 00:15:58.931 "traddr": "10.0.0.1", 00:15:58.931 "trsvcid": "35632" 00:15:58.931 }, 00:15:58.931 "auth": { 00:15:58.931 "state": "completed", 00:15:58.931 "digest": "sha384", 00:15:58.931 "dhgroup": "ffdhe3072" 00:15:58.931 } 00:15:58.931 } 00:15:58.931 ]' 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.931 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.190 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:59.190 15:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.756 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
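The check that follows every attach in this trace can be summarized as a short sketch, assuming the same rpc.py path and sockets that appear in the log; the digest/dhgroup values are whatever the current iteration configured:

```bash
#!/usr/bin/env bash
# Sketch of the post-attach verification repeated in the trace: confirm the
# host-side controller exists, then read the target's qpair view and check the
# negotiated auth parameters before detaching.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
digest=sha384
dhgroup=ffdhe4096   # combination under test in this iteration

# Host side: the attached controller should be reported as nvme0.
name=$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the qpair's auth block should show the requested parameters
# and a completed authentication.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next key/dhgroup combination is tried.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
```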
00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.014 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.272 00:16:00.272 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.272 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.272 15:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.531 { 00:16:00.531 "cntlid": 73, 00:16:00.531 "qid": 0, 00:16:00.531 "state": "enabled", 00:16:00.531 "thread": "nvmf_tgt_poll_group_000", 00:16:00.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:00.531 "listen_address": { 00:16:00.531 "trtype": "TCP", 00:16:00.531 "adrfam": "IPv4", 00:16:00.531 "traddr": "10.0.0.2", 00:16:00.531 "trsvcid": "4420" 00:16:00.531 }, 00:16:00.531 "peer_address": { 00:16:00.531 "trtype": "TCP", 00:16:00.531 "adrfam": "IPv4", 00:16:00.531 "traddr": "10.0.0.1", 00:16:00.531 "trsvcid": "35660" 00:16:00.531 }, 00:16:00.531 "auth": { 00:16:00.531 "state": "completed", 00:16:00.531 "digest": "sha384", 00:16:00.531 "dhgroup": "ffdhe4096" 00:16:00.531 } 00:16:00.531 } 00:16:00.531 ]' 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.531 
15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.531 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.790 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:00.790 15:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.357 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.616 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.874 00:16:01.874 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.874 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.874 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.133 { 00:16:02.133 "cntlid": 75, 00:16:02.133 "qid": 0, 00:16:02.133 "state": "enabled", 00:16:02.133 "thread": "nvmf_tgt_poll_group_000", 00:16:02.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:02.133 "listen_address": { 00:16:02.133 "trtype": "TCP", 00:16:02.133 "adrfam": "IPv4", 00:16:02.133 "traddr": "10.0.0.2", 00:16:02.133 "trsvcid": "4420" 00:16:02.133 }, 00:16:02.133 "peer_address": { 00:16:02.133 "trtype": "TCP", 00:16:02.133 "adrfam": "IPv4", 00:16:02.133 "traddr": "10.0.0.1", 00:16:02.133 "trsvcid": "35696" 00:16:02.133 }, 00:16:02.133 "auth": { 00:16:02.133 "state": "completed", 00:16:02.133 "digest": "sha384", 00:16:02.133 "dhgroup": "ffdhe4096" 00:16:02.133 } 00:16:02.133 } 00:16:02.133 ]' 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:02.133 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.391 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.391 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.391 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.391 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:02.391 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.957 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.215 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:03.215 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.215 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.215 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.215 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.215 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.216 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.473 00:16:03.473 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.473 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.473 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.731 { 00:16:03.731 "cntlid": 77, 00:16:03.731 "qid": 0, 00:16:03.731 "state": "enabled", 00:16:03.731 "thread": "nvmf_tgt_poll_group_000", 00:16:03.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:03.731 "listen_address": { 00:16:03.731 "trtype": "TCP", 00:16:03.731 "adrfam": "IPv4", 00:16:03.731 "traddr": "10.0.0.2", 00:16:03.731 "trsvcid": "4420" 00:16:03.731 }, 00:16:03.731 "peer_address": { 00:16:03.731 "trtype": "TCP", 00:16:03.731 "adrfam": "IPv4", 00:16:03.731 "traddr": "10.0.0.1", 00:16:03.731 "trsvcid": "42958" 00:16:03.731 }, 00:16:03.731 "auth": { 00:16:03.731 "state": "completed", 00:16:03.731 "digest": "sha384", 00:16:03.731 "dhgroup": "ffdhe4096" 00:16:03.731 } 00:16:03.731 } 00:16:03.731 ]' 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.731 15:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.731 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.989 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.989 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.989 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.989 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:03.989 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:04.556 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.814 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.815 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.073 00:16:05.073 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.073 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.073 15:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.334 { 00:16:05.334 "cntlid": 79, 00:16:05.334 "qid": 0, 00:16:05.334 "state": "enabled", 00:16:05.334 "thread": "nvmf_tgt_poll_group_000", 00:16:05.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:05.334 "listen_address": { 00:16:05.334 "trtype": "TCP", 00:16:05.334 "adrfam": "IPv4", 00:16:05.334 "traddr": "10.0.0.2", 00:16:05.334 "trsvcid": "4420" 00:16:05.334 }, 00:16:05.334 "peer_address": { 00:16:05.334 "trtype": "TCP", 00:16:05.334 "adrfam": "IPv4", 00:16:05.334 "traddr": "10.0.0.1", 00:16:05.334 "trsvcid": "42978" 00:16:05.334 }, 00:16:05.334 "auth": { 00:16:05.334 "state": "completed", 00:16:05.334 "digest": "sha384", 00:16:05.334 "dhgroup": "ffdhe4096" 00:16:05.334 } 00:16:05.334 } 00:16:05.334 ]' 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.334 15:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.334 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.850 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:05.850 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.416 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.416 15:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.416 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.983 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.983 { 00:16:06.983 "cntlid": 81, 00:16:06.983 "qid": 0, 00:16:06.983 "state": "enabled", 00:16:06.983 "thread": "nvmf_tgt_poll_group_000", 00:16:06.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:06.983 "listen_address": { 00:16:06.983 "trtype": "TCP", 00:16:06.983 "adrfam": "IPv4", 00:16:06.983 "traddr": "10.0.0.2", 00:16:06.983 "trsvcid": "4420" 00:16:06.983 }, 00:16:06.983 "peer_address": { 00:16:06.983 "trtype": "TCP", 00:16:06.983 "adrfam": "IPv4", 00:16:06.983 "traddr": "10.0.0.1", 00:16:06.983 "trsvcid": "43004" 00:16:06.983 }, 00:16:06.983 "auth": { 00:16:06.983 "state": "completed", 00:16:06.983 "digest": 
"sha384", 00:16:06.983 "dhgroup": "ffdhe6144" 00:16:06.983 } 00:16:06.983 } 00:16:06.983 ]' 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.983 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.241 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.241 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.241 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.241 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.241 15:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.499 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:07.499 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.066 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.633 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.633 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.891 { 00:16:08.891 "cntlid": 83, 00:16:08.891 "qid": 0, 00:16:08.891 "state": "enabled", 00:16:08.891 "thread": "nvmf_tgt_poll_group_000", 00:16:08.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:08.891 "listen_address": { 00:16:08.891 "trtype": "TCP", 00:16:08.891 "adrfam": "IPv4", 00:16:08.891 "traddr": "10.0.0.2", 00:16:08.891 
"trsvcid": "4420" 00:16:08.891 }, 00:16:08.891 "peer_address": { 00:16:08.891 "trtype": "TCP", 00:16:08.891 "adrfam": "IPv4", 00:16:08.891 "traddr": "10.0.0.1", 00:16:08.891 "trsvcid": "43044" 00:16:08.891 }, 00:16:08.891 "auth": { 00:16:08.891 "state": "completed", 00:16:08.891 "digest": "sha384", 00:16:08.891 "dhgroup": "ffdhe6144" 00:16:08.891 } 00:16:08.891 } 00:16:08.891 ]' 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.891 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.150 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:09.150 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.716 
15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.716 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.975 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.234 00:16:10.234 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.234 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.234 15:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.493 { 00:16:10.493 "cntlid": 85, 00:16:10.493 "qid": 0, 00:16:10.493 "state": "enabled", 00:16:10.493 "thread": "nvmf_tgt_poll_group_000", 00:16:10.493 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:10.493 "listen_address": { 00:16:10.493 "trtype": "TCP", 00:16:10.493 "adrfam": "IPv4", 00:16:10.493 "traddr": "10.0.0.2", 00:16:10.493 "trsvcid": "4420" 00:16:10.493 }, 00:16:10.493 "peer_address": { 00:16:10.493 "trtype": "TCP", 00:16:10.493 "adrfam": "IPv4", 00:16:10.493 "traddr": "10.0.0.1", 00:16:10.493 "trsvcid": "43076" 00:16:10.493 }, 00:16:10.493 "auth": { 00:16:10.493 "state": "completed", 00:16:10.493 "digest": "sha384", 00:16:10.493 "dhgroup": "ffdhe6144" 00:16:10.493 } 00:16:10.493 } 00:16:10.493 ]' 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.493 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.751 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:10.751 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.317 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.317 15:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.576 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.860 00:16:11.860 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.860 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.860 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.119 { 00:16:12.119 "cntlid": 87, 
00:16:12.119 "qid": 0, 00:16:12.119 "state": "enabled", 00:16:12.119 "thread": "nvmf_tgt_poll_group_000", 00:16:12.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:12.119 "listen_address": { 00:16:12.119 "trtype": "TCP", 00:16:12.119 "adrfam": "IPv4", 00:16:12.119 "traddr": "10.0.0.2", 00:16:12.119 "trsvcid": "4420" 00:16:12.119 }, 00:16:12.119 "peer_address": { 00:16:12.119 "trtype": "TCP", 00:16:12.119 "adrfam": "IPv4", 00:16:12.119 "traddr": "10.0.0.1", 00:16:12.119 "trsvcid": "43092" 00:16:12.119 }, 00:16:12.119 "auth": { 00:16:12.119 "state": "completed", 00:16:12.119 "digest": "sha384", 00:16:12.119 "dhgroup": "ffdhe6144" 00:16:12.119 } 00:16:12.119 } 00:16:12.119 ]' 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.119 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.378 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:12.378 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.944 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.203 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.776 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.776 { 00:16:13.776 "cntlid": 89, 00:16:13.776 "qid": 0, 00:16:13.776 "state": "enabled", 00:16:13.776 "thread": "nvmf_tgt_poll_group_000", 00:16:13.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:13.776 "listen_address": { 00:16:13.776 "trtype": "TCP", 00:16:13.776 "adrfam": "IPv4", 00:16:13.776 "traddr": "10.0.0.2", 00:16:13.776 "trsvcid": "4420" 00:16:13.776 }, 00:16:13.776 "peer_address": { 00:16:13.776 "trtype": "TCP", 00:16:13.776 "adrfam": "IPv4", 00:16:13.776 "traddr": "10.0.0.1", 00:16:13.776 "trsvcid": "51272" 00:16:13.776 }, 00:16:13.776 "auth": { 00:16:13.776 "state": "completed", 00:16:13.776 "digest": "sha384", 00:16:13.776 "dhgroup": "ffdhe8192" 00:16:13.776 } 00:16:13.776 } 00:16:13.776 ]' 00:16:13.776 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.033 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.291 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:14.292 15:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.858 15:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:14.858 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.116 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.117 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.117 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.375 00:16:15.375 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.375 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.375 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.633 { 00:16:15.633 "cntlid": 91, 00:16:15.633 "qid": 0, 00:16:15.633 "state": "enabled", 00:16:15.633 "thread": "nvmf_tgt_poll_group_000", 00:16:15.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:15.633 "listen_address": { 00:16:15.633 "trtype": "TCP", 00:16:15.633 "adrfam": "IPv4", 00:16:15.633 "traddr": "10.0.0.2", 00:16:15.633 "trsvcid": "4420" 00:16:15.633 }, 00:16:15.633 "peer_address": { 00:16:15.633 "trtype": "TCP", 00:16:15.633 "adrfam": "IPv4", 00:16:15.633 "traddr": "10.0.0.1", 00:16:15.633 "trsvcid": "51296" 00:16:15.633 }, 00:16:15.633 "auth": { 00:16:15.633 "state": "completed", 00:16:15.633 "digest": "sha384", 00:16:15.633 "dhgroup": "ffdhe8192" 00:16:15.633 } 00:16:15.633 } 00:16:15.633 ]' 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.633 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.891 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.891 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.891 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.891 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.891 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.150 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:16.150 15:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:16.717 15:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.717 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.390 00:16:17.390 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.390 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.390 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.390 15:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.390 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.390 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.390 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.390 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.390 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.390 { 00:16:17.390 "cntlid": 93, 00:16:17.390 "qid": 0, 00:16:17.390 "state": "enabled", 00:16:17.390 "thread": "nvmf_tgt_poll_group_000", 00:16:17.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:17.390 "listen_address": { 00:16:17.390 "trtype": "TCP", 00:16:17.390 "adrfam": "IPv4", 00:16:17.390 "traddr": "10.0.0.2", 00:16:17.390 "trsvcid": "4420" 00:16:17.390 }, 00:16:17.390 "peer_address": { 00:16:17.390 "trtype": "TCP", 00:16:17.390 "adrfam": "IPv4", 00:16:17.390 "traddr": "10.0.0.1", 00:16:17.390 "trsvcid": "51318" 00:16:17.390 }, 00:16:17.390 "auth": { 00:16:17.390 "state": "completed", 00:16:17.390 "digest": "sha384", 00:16:17.390 "dhgroup": "ffdhe8192" 00:16:17.390 } 00:16:17.390 } 00:16:17.390 ]' 00:16:17.390 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.669 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.927 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:17.927 15:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:18.493 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.494 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.494 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.752 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.752 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.752 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.752 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.010 00:16:19.010 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.010 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.010 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.269 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.269 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.269 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.269 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.269 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.269 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.269 { 00:16:19.269 "cntlid": 95, 00:16:19.269 "qid": 0, 00:16:19.269 "state": "enabled", 00:16:19.269 "thread": "nvmf_tgt_poll_group_000", 00:16:19.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:19.269 "listen_address": { 00:16:19.269 "trtype": "TCP", 00:16:19.269 "adrfam": "IPv4", 00:16:19.269 "traddr": "10.0.0.2", 00:16:19.269 "trsvcid": "4420" 00:16:19.269 }, 00:16:19.269 "peer_address": { 00:16:19.269 "trtype": "TCP", 00:16:19.269 "adrfam": "IPv4", 00:16:19.269 "traddr": "10.0.0.1", 00:16:19.269 "trsvcid": "51350" 00:16:19.269 }, 00:16:19.269 "auth": { 00:16:19.269 "state": "completed", 00:16:19.269 "digest": "sha384", 00:16:19.269 "dhgroup": "ffdhe8192" 00:16:19.269 } 00:16:19.269 } 00:16:19.269 ]' 00:16:19.269 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.269 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.269 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.527 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.527 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.527 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.527 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.527 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.786 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:19.786 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.353 15:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.353 15:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.353 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.354 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.354 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.354 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.612 00:16:20.612 
15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.612 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.612 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.870 { 00:16:20.870 "cntlid": 97, 00:16:20.870 "qid": 0, 00:16:20.870 "state": "enabled", 00:16:20.870 "thread": "nvmf_tgt_poll_group_000", 00:16:20.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:20.870 "listen_address": { 00:16:20.870 "trtype": "TCP", 00:16:20.870 "adrfam": "IPv4", 00:16:20.870 "traddr": "10.0.0.2", 00:16:20.870 "trsvcid": "4420" 00:16:20.870 }, 00:16:20.870 "peer_address": { 00:16:20.870 "trtype": "TCP", 00:16:20.870 "adrfam": "IPv4", 00:16:20.870 "traddr": "10.0.0.1", 00:16:20.870 "trsvcid": "51364" 00:16:20.870 }, 00:16:20.870 "auth": { 00:16:20.870 "state": "completed", 00:16:20.870 "digest": "sha512", 00:16:20.870 "dhgroup": "null" 00:16:20.870 } 00:16:20.870 } 00:16:20.870 ]' 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.870 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:21.128 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:21.695 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.695 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:21.695 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.695 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.953 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.954 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.210 00:16:22.210 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.210 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.210 15:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.467 { 00:16:22.467 "cntlid": 99, 00:16:22.467 "qid": 0, 00:16:22.467 "state": "enabled", 00:16:22.467 "thread": "nvmf_tgt_poll_group_000", 00:16:22.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:22.467 "listen_address": { 00:16:22.467 "trtype": "TCP", 00:16:22.467 "adrfam": "IPv4", 00:16:22.467 "traddr": "10.0.0.2", 00:16:22.467 "trsvcid": "4420" 00:16:22.467 }, 00:16:22.467 "peer_address": { 00:16:22.467 "trtype": "TCP", 00:16:22.467 "adrfam": "IPv4", 00:16:22.467 "traddr": "10.0.0.1", 00:16:22.467 "trsvcid": "51384" 00:16:22.467 }, 00:16:22.467 "auth": { 00:16:22.467 "state": "completed", 00:16:22.467 "digest": "sha512", 00:16:22.467 "dhgroup": "null" 00:16:22.467 } 00:16:22.467 } 00:16:22.467 ]' 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.467 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.725 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.725 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.725 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.725 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:22.725 15:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.292 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
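For readers picking through the trace: one round of connect_authenticate, as just run above for sha512 with no DH group and key2, reduces to the RPC sequence below. This is a condensed sketch, not the script itself; the socket path, addresses, NQNs and key names are the ones visible in this log (rpc.py stands for the full scripts/rpc.py path used above), and the key2/ckey2 keyring entries are assumed to have been registered earlier in auth.sh.

    # Host-side bdev_nvme options: restrict DH-HMAC-CHAP to the digest/dhgroup under test.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null
    # Target side: authorize the host NQN on the subsystem with a key pair
    # (this call goes to the target's own RPC socket, not host.sock).
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach a controller over TCP, authenticating with the same keys.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
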
00:16:23.555 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.815 00:16:23.815 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.815 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.815 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.074 { 00:16:24.074 "cntlid": 101, 00:16:24.074 "qid": 0, 00:16:24.074 "state": "enabled", 00:16:24.074 "thread": "nvmf_tgt_poll_group_000", 00:16:24.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:24.074 "listen_address": { 00:16:24.074 "trtype": "TCP", 00:16:24.074 "adrfam": "IPv4", 00:16:24.074 "traddr": "10.0.0.2", 00:16:24.074 "trsvcid": "4420" 00:16:24.074 }, 00:16:24.074 "peer_address": { 00:16:24.074 "trtype": "TCP", 00:16:24.074 "adrfam": "IPv4", 00:16:24.074 "traddr": "10.0.0.1", 00:16:24.074 "trsvcid": "35276" 00:16:24.074 }, 00:16:24.074 "auth": { 00:16:24.074 "state": "completed", 00:16:24.074 "digest": "sha512", 00:16:24.074 "dhgroup": "null" 00:16:24.074 } 00:16:24.074 } 00:16:24.074 ]' 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.074 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.333 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.333 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.333 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.333 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:24.333 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:24.899 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.899 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:24.899 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.899 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.899 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.899 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.900 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.900 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.158 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.159 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.159 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.159 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.159 15:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.417 00:16:25.417 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.417 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.417 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.676 { 00:16:25.676 "cntlid": 103, 00:16:25.676 "qid": 0, 00:16:25.676 "state": "enabled", 00:16:25.676 "thread": "nvmf_tgt_poll_group_000", 00:16:25.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:25.676 "listen_address": { 00:16:25.676 "trtype": "TCP", 00:16:25.676 "adrfam": "IPv4", 00:16:25.676 "traddr": "10.0.0.2", 00:16:25.676 "trsvcid": "4420" 00:16:25.676 }, 00:16:25.676 "peer_address": { 00:16:25.676 "trtype": "TCP", 00:16:25.676 "adrfam": "IPv4", 00:16:25.676 "traddr": "10.0.0.1", 00:16:25.676 "trsvcid": "35290" 00:16:25.676 }, 00:16:25.676 "auth": { 00:16:25.676 "state": "completed", 00:16:25.676 "digest": "sha512", 00:16:25.676 "dhgroup": "null" 00:16:25.676 } 00:16:25.676 } 00:16:25.676 ]' 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.676 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.935 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.935 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.935 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.935 15:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:25.935 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.502 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
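Between rounds the test also exercises the kernel initiator against the same subsystem, as in the nvme_connect, nvme disconnect and nvmf_subsystem_remove_host calls traced just above. Condensed, and with the DHHC-1 secrets (which the log prints in full) truncated here, that leg looks roughly like this sketch:

    # Connect the kernel NVMe/TCP initiator using the DH-HMAC-CHAP secrets that
    # match the key pair currently authorized on the target; the ctrl secret is
    # only passed on rounds that have a controller key (the key3 round above omits it).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
    # Tear the pairing down again before the next digest/dhgroup/key combination.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
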
00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.761 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.019 00:16:27.019 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.019 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.019 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.277 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.277 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.277 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.277 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.277 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.277 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.277 { 00:16:27.277 "cntlid": 105, 00:16:27.277 "qid": 0, 00:16:27.277 "state": "enabled", 00:16:27.277 "thread": "nvmf_tgt_poll_group_000", 00:16:27.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:27.277 "listen_address": { 00:16:27.277 "trtype": "TCP", 00:16:27.277 "adrfam": "IPv4", 00:16:27.277 "traddr": "10.0.0.2", 00:16:27.277 "trsvcid": "4420" 00:16:27.277 }, 00:16:27.277 "peer_address": { 00:16:27.277 "trtype": "TCP", 00:16:27.277 "adrfam": "IPv4", 00:16:27.277 "traddr": "10.0.0.1", 00:16:27.277 "trsvcid": "35320" 00:16:27.278 }, 00:16:27.278 "auth": { 00:16:27.278 "state": "completed", 00:16:27.278 "digest": "sha512", 00:16:27.278 "dhgroup": "ffdhe2048" 00:16:27.278 } 00:16:27.278 } 00:16:27.278 ]' 00:16:27.278 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.278 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.278 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.278 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.278 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.278 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.278 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.278 15:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.536 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:27.536 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.103 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.361 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.620 00:16:28.620 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.620 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.620 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.878 { 00:16:28.878 "cntlid": 107, 00:16:28.878 "qid": 0, 00:16:28.878 "state": "enabled", 00:16:28.878 "thread": "nvmf_tgt_poll_group_000", 00:16:28.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:28.878 "listen_address": { 00:16:28.878 "trtype": "TCP", 00:16:28.878 "adrfam": "IPv4", 00:16:28.878 "traddr": "10.0.0.2", 00:16:28.878 "trsvcid": "4420" 00:16:28.878 }, 00:16:28.878 "peer_address": { 00:16:28.878 "trtype": "TCP", 00:16:28.878 "adrfam": "IPv4", 00:16:28.878 "traddr": "10.0.0.1", 00:16:28.878 "trsvcid": "35342" 00:16:28.878 }, 00:16:28.878 "auth": { 00:16:28.878 "state": "completed", 00:16:28.878 "digest": "sha512", 00:16:28.878 "dhgroup": "ffdhe2048" 00:16:28.878 } 00:16:28.878 } 00:16:28.878 ]' 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.878 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.137 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:29.137 15:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.704 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
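The repeated qpair checks scattered through this trace (the jq '.[0].auth.digest', '.[0].auth.dhgroup' and '.[0].auth.state' filters followed by the escaped [[ ... ]] comparisons) amount to roughly the verification step sketched below, shown here for the sha512/ffdhe2048 round that was just checked above:

    # Ask the target which qpairs the subsystem has and confirm the connection
    # really authenticated with the expected digest and DH group.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
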
00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.962 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.221 00:16:30.221 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.221 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.221 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.480 { 00:16:30.480 "cntlid": 109, 00:16:30.480 "qid": 0, 00:16:30.480 "state": "enabled", 00:16:30.480 "thread": "nvmf_tgt_poll_group_000", 00:16:30.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:30.480 "listen_address": { 00:16:30.480 "trtype": "TCP", 00:16:30.480 "adrfam": "IPv4", 00:16:30.480 "traddr": "10.0.0.2", 00:16:30.480 "trsvcid": "4420" 00:16:30.480 }, 00:16:30.480 "peer_address": { 00:16:30.480 "trtype": "TCP", 00:16:30.480 "adrfam": "IPv4", 00:16:30.480 "traddr": "10.0.0.1", 00:16:30.480 "trsvcid": "35362" 00:16:30.480 }, 00:16:30.480 "auth": { 00:16:30.480 "state": "completed", 00:16:30.480 "digest": "sha512", 00:16:30.480 "dhgroup": "ffdhe2048" 00:16:30.480 } 00:16:30.480 } 00:16:30.480 ]' 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.480 15:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.480 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.738 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:30.738 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.305 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.564 15:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.564 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.823 00:16:31.823 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.823 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.823 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.081 { 00:16:32.081 "cntlid": 111, 00:16:32.081 "qid": 0, 00:16:32.081 "state": "enabled", 00:16:32.081 "thread": "nvmf_tgt_poll_group_000", 00:16:32.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:32.081 "listen_address": { 00:16:32.081 "trtype": "TCP", 00:16:32.081 "adrfam": "IPv4", 00:16:32.081 "traddr": "10.0.0.2", 00:16:32.081 "trsvcid": "4420" 00:16:32.081 }, 00:16:32.081 "peer_address": { 00:16:32.081 "trtype": "TCP", 00:16:32.081 "adrfam": "IPv4", 00:16:32.081 "traddr": "10.0.0.1", 00:16:32.081 "trsvcid": "35396" 00:16:32.081 }, 00:16:32.081 "auth": { 00:16:32.081 "state": "completed", 00:16:32.081 "digest": "sha512", 00:16:32.081 "dhgroup": "ffdhe2048" 00:16:32.081 } 00:16:32.081 } 00:16:32.081 ]' 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.081 
15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.081 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.340 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:32.340 15:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:32.911 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.169 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.428 00:16:33.428 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.428 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.428 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.686 { 00:16:33.686 "cntlid": 113, 00:16:33.686 "qid": 0, 00:16:33.686 "state": "enabled", 00:16:33.686 "thread": "nvmf_tgt_poll_group_000", 00:16:33.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:33.686 "listen_address": { 00:16:33.686 "trtype": "TCP", 00:16:33.686 "adrfam": "IPv4", 00:16:33.686 "traddr": "10.0.0.2", 00:16:33.686 "trsvcid": "4420" 00:16:33.686 }, 00:16:33.686 "peer_address": { 00:16:33.686 "trtype": "TCP", 00:16:33.686 "adrfam": "IPv4", 00:16:33.686 "traddr": "10.0.0.1", 00:16:33.686 "trsvcid": "44590" 00:16:33.686 }, 00:16:33.686 "auth": { 00:16:33.686 "state": "completed", 00:16:33.686 "digest": "sha512", 00:16:33.686 "dhgroup": "ffdhe3072" 00:16:33.686 } 00:16:33.686 } 00:16:33.686 ]' 00:16:33.686 15:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.686 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.945 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:33.945 15:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.511 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.770 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.028 00:16:35.028 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.028 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.028 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.287 { 00:16:35.287 "cntlid": 115, 00:16:35.287 "qid": 0, 00:16:35.287 "state": "enabled", 00:16:35.287 "thread": "nvmf_tgt_poll_group_000", 00:16:35.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:35.287 "listen_address": { 00:16:35.287 "trtype": "TCP", 00:16:35.287 "adrfam": "IPv4", 00:16:35.287 "traddr": "10.0.0.2", 00:16:35.287 "trsvcid": "4420" 00:16:35.287 }, 00:16:35.287 "peer_address": { 00:16:35.287 "trtype": "TCP", 00:16:35.287 "adrfam": "IPv4", 
00:16:35.287 "traddr": "10.0.0.1", 00:16:35.287 "trsvcid": "44628" 00:16:35.287 }, 00:16:35.287 "auth": { 00:16:35.287 "state": "completed", 00:16:35.287 "digest": "sha512", 00:16:35.287 "dhgroup": "ffdhe3072" 00:16:35.287 } 00:16:35.287 } 00:16:35.287 ]' 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.287 15:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.287 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.287 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.287 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.287 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.287 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.545 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:35.545 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.111 15:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.371 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.629 00:16:36.629 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.629 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.629 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.888 { 00:16:36.888 "cntlid": 117, 00:16:36.888 "qid": 0, 00:16:36.888 "state": "enabled", 00:16:36.888 "thread": "nvmf_tgt_poll_group_000", 00:16:36.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:36.888 "listen_address": { 00:16:36.888 "trtype": "TCP", 
00:16:36.888 "adrfam": "IPv4", 00:16:36.888 "traddr": "10.0.0.2", 00:16:36.888 "trsvcid": "4420" 00:16:36.888 }, 00:16:36.888 "peer_address": { 00:16:36.888 "trtype": "TCP", 00:16:36.888 "adrfam": "IPv4", 00:16:36.888 "traddr": "10.0.0.1", 00:16:36.888 "trsvcid": "44662" 00:16:36.888 }, 00:16:36.888 "auth": { 00:16:36.888 "state": "completed", 00:16:36.888 "digest": "sha512", 00:16:36.888 "dhgroup": "ffdhe3072" 00:16:36.888 } 00:16:36.888 } 00:16:36.888 ]' 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.888 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.147 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:37.147 15:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.714 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.972 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.231 00:16:38.231 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.231 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.231 15:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.489 { 00:16:38.489 "cntlid": 119, 00:16:38.489 "qid": 0, 00:16:38.489 "state": "enabled", 00:16:38.489 "thread": "nvmf_tgt_poll_group_000", 00:16:38.489 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:38.489 "listen_address": { 00:16:38.489 "trtype": "TCP", 00:16:38.489 "adrfam": "IPv4", 00:16:38.489 "traddr": "10.0.0.2", 00:16:38.489 "trsvcid": "4420" 00:16:38.489 }, 00:16:38.489 "peer_address": { 00:16:38.489 "trtype": "TCP", 00:16:38.489 "adrfam": "IPv4", 00:16:38.489 "traddr": "10.0.0.1", 00:16:38.489 "trsvcid": "44692" 00:16:38.489 }, 00:16:38.489 "auth": { 00:16:38.489 "state": "completed", 00:16:38.489 "digest": "sha512", 00:16:38.489 "dhgroup": "ffdhe3072" 00:16:38.489 } 00:16:38.489 } 00:16:38.489 ]' 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.489 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.747 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:38.747 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:39.315 15:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.315 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.315 15:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.573 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.832 00:16:39.832 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.832 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.832 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.091 15:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.091 { 00:16:40.091 "cntlid": 121, 00:16:40.091 "qid": 0, 00:16:40.091 "state": "enabled", 00:16:40.091 "thread": "nvmf_tgt_poll_group_000", 00:16:40.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:40.091 "listen_address": { 00:16:40.091 "trtype": "TCP", 00:16:40.091 "adrfam": "IPv4", 00:16:40.091 "traddr": "10.0.0.2", 00:16:40.091 "trsvcid": "4420" 00:16:40.091 }, 00:16:40.091 "peer_address": { 00:16:40.091 "trtype": "TCP", 00:16:40.091 "adrfam": "IPv4", 00:16:40.091 "traddr": "10.0.0.1", 00:16:40.091 "trsvcid": "44726" 00:16:40.091 }, 00:16:40.091 "auth": { 00:16:40.091 "state": "completed", 00:16:40.091 "digest": "sha512", 00:16:40.091 "dhgroup": "ffdhe4096" 00:16:40.091 } 00:16:40.091 } 00:16:40.091 ]' 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.091 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.350 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:40.350 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
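
Besides the bdev_nvme path, each pass also exercises the kernel nvme-cli initiator with the same DH-HMAC-CHAP material, then tears the host back out of the subsystem. A sketch of that leg, with the DHHC-1 secrets elided (they are printed in full in the trace; the surrounding flags are copied from it):

    # In-band authenticated connect from nvme-cli, using the host and controller
    # secrets for this key index (values truncated here).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'

    # Drop the connection and de-authorize the host on the target again.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
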
00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:40.917 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.176 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.434 00:16:41.434 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.434 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.434 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.693 { 00:16:41.693 "cntlid": 123, 00:16:41.693 "qid": 0, 00:16:41.693 "state": "enabled", 00:16:41.693 "thread": "nvmf_tgt_poll_group_000", 00:16:41.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:41.693 "listen_address": { 00:16:41.693 "trtype": "TCP", 00:16:41.693 "adrfam": "IPv4", 00:16:41.693 "traddr": "10.0.0.2", 00:16:41.693 "trsvcid": "4420" 00:16:41.693 }, 00:16:41.693 "peer_address": { 00:16:41.693 "trtype": "TCP", 00:16:41.693 "adrfam": "IPv4", 00:16:41.693 "traddr": "10.0.0.1", 00:16:41.693 "trsvcid": "44740" 00:16:41.693 }, 00:16:41.693 "auth": { 00:16:41.693 "state": "completed", 00:16:41.693 "digest": "sha512", 00:16:41.693 "dhgroup": "ffdhe4096" 00:16:41.693 } 00:16:41.693 } 00:16:41.693 ]' 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.693 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.952 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:41.952 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.519 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.519 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.778 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.036 00:16:43.036 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.036 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.036 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.295 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.295 { 00:16:43.295 "cntlid": 125, 00:16:43.295 "qid": 0, 00:16:43.295 "state": "enabled", 00:16:43.295 "thread": "nvmf_tgt_poll_group_000", 00:16:43.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:43.295 "listen_address": { 00:16:43.295 "trtype": "TCP", 00:16:43.295 "adrfam": "IPv4", 00:16:43.295 "traddr": "10.0.0.2", 00:16:43.295 "trsvcid": "4420" 00:16:43.295 }, 00:16:43.295 "peer_address": { 00:16:43.295 "trtype": "TCP", 00:16:43.295 "adrfam": "IPv4", 00:16:43.295 "traddr": "10.0.0.1", 00:16:43.295 "trsvcid": "49828" 00:16:43.295 }, 00:16:43.295 "auth": { 00:16:43.295 "state": "completed", 00:16:43.295 "digest": "sha512", 00:16:43.295 "dhgroup": "ffdhe4096" 00:16:43.295 } 00:16:43.295 } 00:16:43.295 ]' 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.295 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.295 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.295 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.295 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.295 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.295 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.554 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:43.554 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.120 15:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.378 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.636 00:16:44.636 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.636 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.636 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.894 15:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.895 { 00:16:44.895 "cntlid": 127, 00:16:44.895 "qid": 0, 00:16:44.895 "state": "enabled", 00:16:44.895 "thread": "nvmf_tgt_poll_group_000", 00:16:44.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:44.895 "listen_address": { 00:16:44.895 "trtype": "TCP", 00:16:44.895 "adrfam": "IPv4", 00:16:44.895 "traddr": "10.0.0.2", 00:16:44.895 "trsvcid": "4420" 00:16:44.895 }, 00:16:44.895 "peer_address": { 00:16:44.895 "trtype": "TCP", 00:16:44.895 "adrfam": "IPv4", 00:16:44.895 "traddr": "10.0.0.1", 00:16:44.895 "trsvcid": "49866" 00:16:44.895 }, 00:16:44.895 "auth": { 00:16:44.895 "state": "completed", 00:16:44.895 "digest": "sha512", 00:16:44.895 "dhgroup": "ffdhe4096" 00:16:44.895 } 00:16:44.895 } 00:16:44.895 ]' 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.895 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.153 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.153 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.154 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.154 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:45.154 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.718 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.976 15:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.543 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.543 
15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.543 { 00:16:46.543 "cntlid": 129, 00:16:46.543 "qid": 0, 00:16:46.543 "state": "enabled", 00:16:46.543 "thread": "nvmf_tgt_poll_group_000", 00:16:46.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:46.543 "listen_address": { 00:16:46.543 "trtype": "TCP", 00:16:46.543 "adrfam": "IPv4", 00:16:46.543 "traddr": "10.0.0.2", 00:16:46.543 "trsvcid": "4420" 00:16:46.543 }, 00:16:46.543 "peer_address": { 00:16:46.543 "trtype": "TCP", 00:16:46.543 "adrfam": "IPv4", 00:16:46.543 "traddr": "10.0.0.1", 00:16:46.543 "trsvcid": "49896" 00:16:46.543 }, 00:16:46.543 "auth": { 00:16:46.543 "state": "completed", 00:16:46.543 "digest": "sha512", 00:16:46.543 "dhgroup": "ffdhe6144" 00:16:46.543 } 00:16:46.543 } 00:16:46.543 ]' 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.543 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:46.802 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret 
DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.369 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.628 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.195 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.195 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.195 { 00:16:48.195 "cntlid": 131, 00:16:48.195 "qid": 0, 00:16:48.195 "state": "enabled", 00:16:48.195 "thread": "nvmf_tgt_poll_group_000", 00:16:48.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:48.195 "listen_address": { 00:16:48.195 "trtype": "TCP", 00:16:48.195 "adrfam": "IPv4", 00:16:48.195 "traddr": "10.0.0.2", 00:16:48.195 "trsvcid": "4420" 00:16:48.195 }, 00:16:48.195 "peer_address": { 00:16:48.195 "trtype": "TCP", 00:16:48.195 "adrfam": "IPv4", 00:16:48.195 "traddr": "10.0.0.1", 00:16:48.195 "trsvcid": "49922" 00:16:48.195 }, 00:16:48.195 "auth": { 00:16:48.195 "state": "completed", 00:16:48.195 "digest": "sha512", 00:16:48.195 "dhgroup": "ffdhe6144" 00:16:48.195 } 00:16:48.195 } 00:16:48.195 ]' 00:16:48.196 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.196 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.196 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.454 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.454 15:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.454 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.454 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.454 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.712 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:48.712 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.280 15:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.280 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.847 00:16:49.847 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.847 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.847 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.848 { 00:16:49.848 "cntlid": 133, 00:16:49.848 "qid": 0, 00:16:49.848 "state": "enabled", 00:16:49.848 "thread": "nvmf_tgt_poll_group_000", 00:16:49.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:49.848 "listen_address": { 00:16:49.848 "trtype": "TCP", 00:16:49.848 "adrfam": "IPv4", 00:16:49.848 "traddr": "10.0.0.2", 00:16:49.848 "trsvcid": "4420" 00:16:49.848 }, 00:16:49.848 "peer_address": { 00:16:49.848 "trtype": "TCP", 00:16:49.848 "adrfam": "IPv4", 00:16:49.848 "traddr": "10.0.0.1", 00:16:49.848 "trsvcid": "49948" 00:16:49.848 }, 00:16:49.848 "auth": { 00:16:49.848 "state": "completed", 00:16:49.848 "digest": "sha512", 00:16:49.848 "dhgroup": "ffdhe6144" 00:16:49.848 } 00:16:49.848 } 00:16:49.848 ]' 00:16:49.848 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.106 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.364 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret 
DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:50.365 15:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:50.932 15:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.499 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.499 { 00:16:51.499 "cntlid": 135, 00:16:51.499 "qid": 0, 00:16:51.499 "state": "enabled", 00:16:51.499 "thread": "nvmf_tgt_poll_group_000", 00:16:51.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:51.499 "listen_address": { 00:16:51.499 "trtype": "TCP", 00:16:51.499 "adrfam": "IPv4", 00:16:51.499 "traddr": "10.0.0.2", 00:16:51.499 "trsvcid": "4420" 00:16:51.499 }, 00:16:51.499 "peer_address": { 00:16:51.499 "trtype": "TCP", 00:16:51.499 "adrfam": "IPv4", 00:16:51.499 "traddr": "10.0.0.1", 00:16:51.499 "trsvcid": "49988" 00:16:51.499 }, 00:16:51.499 "auth": { 00:16:51.499 "state": "completed", 00:16:51.499 "digest": "sha512", 00:16:51.499 "dhgroup": "ffdhe6144" 00:16:51.499 } 00:16:51.499 } 00:16:51.499 ]' 00:16:51.499 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.758 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.016 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:52.016 15:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.583 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.842 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.101 00:16:53.101 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.101 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.101 15:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.359 { 00:16:53.359 "cntlid": 137, 00:16:53.359 "qid": 0, 00:16:53.359 "state": "enabled", 00:16:53.359 "thread": "nvmf_tgt_poll_group_000", 00:16:53.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:53.359 "listen_address": { 00:16:53.359 "trtype": "TCP", 00:16:53.359 "adrfam": "IPv4", 00:16:53.359 "traddr": "10.0.0.2", 00:16:53.359 "trsvcid": "4420" 00:16:53.359 }, 00:16:53.359 "peer_address": { 00:16:53.359 "trtype": "TCP", 00:16:53.359 "adrfam": "IPv4", 00:16:53.359 "traddr": "10.0.0.1", 00:16:53.359 "trsvcid": "53350" 00:16:53.359 }, 00:16:53.359 "auth": { 00:16:53.359 "state": "completed", 00:16:53.359 "digest": "sha512", 00:16:53.359 "dhgroup": "ffdhe8192" 00:16:53.359 } 00:16:53.359 } 00:16:53.359 ]' 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.359 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.617 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.617 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.617 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.617 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.617 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.874 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:53.874 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.440 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.440 15:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.440 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.007 00:16:55.007 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.007 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.007 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.268 { 00:16:55.268 "cntlid": 139, 00:16:55.268 "qid": 0, 00:16:55.268 "state": "enabled", 00:16:55.268 "thread": "nvmf_tgt_poll_group_000", 00:16:55.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:55.268 "listen_address": { 00:16:55.268 "trtype": "TCP", 00:16:55.268 "adrfam": "IPv4", 00:16:55.268 "traddr": "10.0.0.2", 00:16:55.268 "trsvcid": "4420" 00:16:55.268 }, 00:16:55.268 "peer_address": { 00:16:55.268 "trtype": "TCP", 00:16:55.268 "adrfam": "IPv4", 00:16:55.268 "traddr": "10.0.0.1", 00:16:55.268 "trsvcid": "53374" 00:16:55.268 }, 00:16:55.268 "auth": { 00:16:55.268 "state": "completed", 00:16:55.268 "digest": "sha512", 00:16:55.268 "dhgroup": "ffdhe8192" 00:16:55.268 } 00:16:55.268 } 00:16:55.268 ]' 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.268 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.268 15:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.268 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.268 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.563 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:55.563 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: --dhchap-ctrl-secret DHHC-1:02:ZjVhYzc3NTM0NzZkM2MxMjgzNmRhZWFkMTZmMDI5NmQ1OTY0ZjI1MjBjZGQ2OGIxq/MZlQ==: 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.162 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.421 15:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.421 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.989 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.989 { 00:16:56.989 "cntlid": 141, 00:16:56.989 "qid": 0, 00:16:56.989 "state": "enabled", 00:16:56.989 "thread": "nvmf_tgt_poll_group_000", 00:16:56.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:56.989 "listen_address": { 00:16:56.989 "trtype": "TCP", 00:16:56.989 "adrfam": "IPv4", 00:16:56.989 "traddr": "10.0.0.2", 00:16:56.989 "trsvcid": "4420" 00:16:56.989 }, 00:16:56.989 "peer_address": { 00:16:56.989 "trtype": "TCP", 00:16:56.989 "adrfam": "IPv4", 00:16:56.989 "traddr": "10.0.0.1", 00:16:56.989 "trsvcid": "53384" 00:16:56.989 }, 00:16:56.989 "auth": { 00:16:56.989 "state": "completed", 00:16:56.989 "digest": "sha512", 00:16:56.989 "dhgroup": "ffdhe8192" 00:16:56.989 } 00:16:56.989 } 00:16:56.989 ]' 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.989 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.247 15:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.247 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.247 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.247 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.247 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.247 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:57.247 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:01:MjI2OGQ5ZTRiM2NkMDUzNDgyY2Q4YWI4MTUyMWVlYjhbrnKO: 00:16:57.814 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.814 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:57.814 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.814 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.072 15:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.072 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.639 00:16:58.639 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.639 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.639 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.897 { 00:16:58.897 "cntlid": 143, 00:16:58.897 "qid": 0, 00:16:58.897 "state": "enabled", 00:16:58.897 "thread": "nvmf_tgt_poll_group_000", 00:16:58.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:58.897 "listen_address": { 00:16:58.897 "trtype": "TCP", 00:16:58.897 "adrfam": "IPv4", 00:16:58.897 "traddr": "10.0.0.2", 00:16:58.897 "trsvcid": "4420" 00:16:58.897 }, 00:16:58.897 "peer_address": { 00:16:58.897 "trtype": "TCP", 00:16:58.897 "adrfam": "IPv4", 00:16:58.897 "traddr": "10.0.0.1", 00:16:58.897 "trsvcid": "53412" 00:16:58.897 }, 00:16:58.897 "auth": { 00:16:58.897 "state": "completed", 00:16:58.897 "digest": "sha512", 00:16:58.897 "dhgroup": "ffdhe8192" 00:16:58.897 } 00:16:58.897 } 00:16:58.897 ]' 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.897 
15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.897 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.156 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:59.156 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.723 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.981 15:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.981 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.548 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.548 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.807 { 00:17:00.807 "cntlid": 145, 00:17:00.807 "qid": 0, 00:17:00.807 "state": "enabled", 00:17:00.807 "thread": "nvmf_tgt_poll_group_000", 00:17:00.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:00.807 "listen_address": { 00:17:00.807 "trtype": "TCP", 00:17:00.807 "adrfam": "IPv4", 00:17:00.807 "traddr": "10.0.0.2", 00:17:00.807 "trsvcid": "4420" 00:17:00.807 }, 00:17:00.807 "peer_address": { 00:17:00.807 
"trtype": "TCP", 00:17:00.807 "adrfam": "IPv4", 00:17:00.807 "traddr": "10.0.0.1", 00:17:00.807 "trsvcid": "53428" 00:17:00.807 }, 00:17:00.807 "auth": { 00:17:00.807 "state": "completed", 00:17:00.807 "digest": "sha512", 00:17:00.807 "dhgroup": "ffdhe8192" 00:17:00.807 } 00:17:00.807 } 00:17:00.807 ]' 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.807 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.065 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:17:01.065 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDBmNjY2NWUwNmM0OGYzZjM3NmE1NjBiMzA2NzQ1MWI2MmYwOGU4YjIxZDU3OTc2Db2K9A==: --dhchap-ctrl-secret DHHC-1:03:NWQzYjk3MTQ0Yjk3NTdhYWVmYzliNjE3MjgyMGFkM2ZkNDQ4MjM3ZmRjYjgzODcxMDY3YjJjODUwYTAzMzNjMd4NBdI=: 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:01.632 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:01.890 request: 00:17:01.890 { 00:17:01.890 "name": "nvme0", 00:17:01.890 "trtype": "tcp", 00:17:01.890 "traddr": "10.0.0.2", 00:17:01.890 "adrfam": "ipv4", 00:17:01.890 "trsvcid": "4420", 00:17:01.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:01.890 "prchk_reftag": false, 00:17:01.890 "prchk_guard": false, 00:17:01.890 "hdgst": false, 00:17:01.890 "ddgst": false, 00:17:01.890 "dhchap_key": "key2", 00:17:01.890 "allow_unrecognized_csi": false, 00:17:01.890 "method": "bdev_nvme_attach_controller", 00:17:01.890 "req_id": 1 00:17:01.890 } 00:17:01.890 Got JSON-RPC error response 00:17:01.890 response: 00:17:01.890 { 00:17:01.890 "code": -5, 00:17:01.890 "message": "Input/output error" 00:17:01.890 } 00:17:02.148 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.148 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.148 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.149 15:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.149 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.407 request: 00:17:02.407 { 00:17:02.407 "name": "nvme0", 00:17:02.407 "trtype": "tcp", 00:17:02.407 "traddr": "10.0.0.2", 00:17:02.407 "adrfam": "ipv4", 00:17:02.407 "trsvcid": "4420", 00:17:02.407 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:02.407 "prchk_reftag": false, 00:17:02.407 "prchk_guard": false, 00:17:02.407 "hdgst": false, 00:17:02.407 "ddgst": false, 00:17:02.408 "dhchap_key": "key1", 00:17:02.408 "dhchap_ctrlr_key": "ckey2", 00:17:02.408 "allow_unrecognized_csi": false, 00:17:02.408 "method": "bdev_nvme_attach_controller", 00:17:02.408 "req_id": 1 00:17:02.408 } 00:17:02.408 Got JSON-RPC error response 00:17:02.408 response: 00:17:02.408 { 00:17:02.408 "code": -5, 00:17:02.408 "message": "Input/output error" 00:17:02.408 } 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.408 15:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.408 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.976 request: 00:17:02.976 { 00:17:02.976 "name": "nvme0", 00:17:02.976 "trtype": "tcp", 00:17:02.976 "traddr": "10.0.0.2", 00:17:02.976 "adrfam": "ipv4", 00:17:02.976 "trsvcid": "4420", 00:17:02.976 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:02.976 "prchk_reftag": false, 00:17:02.976 "prchk_guard": false, 00:17:02.976 "hdgst": false, 00:17:02.976 "ddgst": false, 00:17:02.976 "dhchap_key": "key1", 00:17:02.976 "dhchap_ctrlr_key": "ckey1", 00:17:02.976 "allow_unrecognized_csi": false, 00:17:02.976 "method": "bdev_nvme_attach_controller", 00:17:02.976 "req_id": 1 00:17:02.976 } 00:17:02.976 Got JSON-RPC error response 00:17:02.976 response: 00:17:02.976 { 00:17:02.976 "code": -5, 00:17:02.976 "message": "Input/output error" 00:17:02.976 } 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1408457 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1408457 ']' 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1408457 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1408457 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1408457' 00:17:02.976 killing process with pid 1408457 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1408457 00:17:02.976 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1408457 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1430572 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1430572 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1430572 ']' 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.235 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1430572 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1430572 ']' 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
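Note: at this point the target has been restarted with --wait-for-rpc -L nvmf_auth, and the run below loads the previously generated DHCHAP key files into the target's keyring before re-adding the host and reconnecting. A minimal sketch of that sequence for key3, assuming the key path /tmp/spdk.key-sha512.Gaw generated earlier in this run (key paths change from run to run; rpc.py paths abbreviated; target-side RPCs go to the /var/tmp/spdk.sock socket waited on above, host-side RPCs to /var/tmp/host.sock):

    # target side: register the key file with the keyring and allow the host to authenticate with it
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Gaw
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
    # host side: attach a controller over TCP, authenticating with the same key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3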
00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.495 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 null0 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Kv 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.DaT ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DaT 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.F9M 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.fOi ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fOi 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:03.753 15:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2dv 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.n0n ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0n 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Gaw 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.753 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
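Note: bdev_connect/hostrpc are thin test wrappers; as the next trace line shows, this call expands to a plain rpc.py invocation against the host-side socket /var/tmp/host.sock. Once the controller comes up (nvme0n1 below), the run re-checks the negotiated parameters the same way as in the earlier qpair checks. A minimal verification sketch (rpc.py paths abbreviated; target-side calls assumed to use the default /var/tmp/spdk.sock socket this run waits on):

    # host side: the freshly attached controller should be listed as nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # target side: the qpair should report auth state "completed" with sha512 / ffdhe8192
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'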
00:17:03.754 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.689 nvme0n1 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.689 { 00:17:04.689 "cntlid": 1, 00:17:04.689 "qid": 0, 00:17:04.689 "state": "enabled", 00:17:04.689 "thread": "nvmf_tgt_poll_group_000", 00:17:04.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:04.689 "listen_address": { 00:17:04.689 "trtype": "TCP", 00:17:04.689 "adrfam": "IPv4", 00:17:04.689 "traddr": "10.0.0.2", 00:17:04.689 "trsvcid": "4420" 00:17:04.689 }, 00:17:04.689 "peer_address": { 00:17:04.689 "trtype": "TCP", 00:17:04.689 "adrfam": "IPv4", 00:17:04.689 "traddr": "10.0.0.1", 00:17:04.689 "trsvcid": "48478" 00:17:04.689 }, 00:17:04.689 "auth": { 00:17:04.689 "state": "completed", 00:17:04.689 "digest": "sha512", 00:17:04.689 "dhgroup": "ffdhe8192" 00:17:04.689 } 00:17:04.689 } 00:17:04.689 ]' 00:17:04.689 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.948 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.211 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:17:05.211 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:05.779 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.037 request: 00:17:06.037 { 00:17:06.037 "name": "nvme0", 00:17:06.037 "trtype": "tcp", 00:17:06.037 "traddr": "10.0.0.2", 00:17:06.037 "adrfam": "ipv4", 00:17:06.037 "trsvcid": "4420", 00:17:06.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:06.037 "prchk_reftag": false, 00:17:06.037 "prchk_guard": false, 00:17:06.037 "hdgst": false, 00:17:06.037 "ddgst": false, 00:17:06.037 "dhchap_key": "key3", 00:17:06.037 "allow_unrecognized_csi": false, 00:17:06.037 "method": "bdev_nvme_attach_controller", 00:17:06.037 "req_id": 1 00:17:06.037 } 00:17:06.037 Got JSON-RPC error response 00:17:06.037 response: 00:17:06.037 { 00:17:06.037 "code": -5, 00:17:06.037 "message": "Input/output error" 00:17:06.037 } 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.037 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.296 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.296 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.554 request: 00:17:06.554 { 00:17:06.554 "name": "nvme0", 00:17:06.554 "trtype": "tcp", 00:17:06.554 "traddr": "10.0.0.2", 00:17:06.554 "adrfam": "ipv4", 00:17:06.554 "trsvcid": "4420", 00:17:06.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:06.554 "prchk_reftag": false, 00:17:06.554 "prchk_guard": false, 00:17:06.554 "hdgst": false, 00:17:06.554 "ddgst": false, 00:17:06.554 "dhchap_key": "key3", 00:17:06.554 "allow_unrecognized_csi": false, 00:17:06.554 "method": "bdev_nvme_attach_controller", 00:17:06.554 "req_id": 1 00:17:06.554 } 00:17:06.554 Got JSON-RPC error response 00:17:06.554 response: 00:17:06.554 { 00:17:06.554 "code": -5, 00:17:06.554 "message": "Input/output error" 00:17:06.554 } 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.554 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.813 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.072 request: 00:17:07.072 { 00:17:07.072 "name": "nvme0", 00:17:07.072 "trtype": "tcp", 00:17:07.072 "traddr": "10.0.0.2", 00:17:07.072 "adrfam": "ipv4", 00:17:07.072 "trsvcid": "4420", 00:17:07.072 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:07.072 "prchk_reftag": false, 00:17:07.072 "prchk_guard": false, 00:17:07.072 "hdgst": false, 00:17:07.072 "ddgst": false, 00:17:07.072 "dhchap_key": "key0", 00:17:07.072 "dhchap_ctrlr_key": "key1", 00:17:07.072 "allow_unrecognized_csi": false, 00:17:07.072 "method": "bdev_nvme_attach_controller", 00:17:07.072 "req_id": 1 00:17:07.072 } 00:17:07.072 Got JSON-RPC error response 00:17:07.072 response: 00:17:07.072 { 00:17:07.072 "code": -5, 00:17:07.072 "message": "Input/output error" 00:17:07.072 } 00:17:07.072 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:07.072 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.072 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.072 15:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.072 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:07.072 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:07.072 15:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:07.330 nvme0n1 00:17:07.330 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:07.330 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.330 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:07.589 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.589 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.589 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:07.848 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:08.415 nvme0n1 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:08.673 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.932 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.932 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:17:08.932 15:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: --dhchap-ctrl-secret DHHC-1:03:MjQxNmY4MWEzZWY4OTRlYTBkNzE5YjQyZDY3ZmFhN2RlZTEwZWUxMjI3M2U1NWM4NzE4NGNjNWMyY2YyMzk2MMQGiaU=: 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.500 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:09.759 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:10.327 request: 00:17:10.327 { 00:17:10.327 "name": "nvme0", 00:17:10.327 "trtype": "tcp", 00:17:10.327 "traddr": "10.0.0.2", 00:17:10.327 "adrfam": "ipv4", 00:17:10.327 "trsvcid": "4420", 00:17:10.327 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:10.327 "prchk_reftag": false, 00:17:10.327 "prchk_guard": false, 00:17:10.327 "hdgst": false, 00:17:10.327 "ddgst": false, 00:17:10.327 "dhchap_key": "key1", 00:17:10.327 "allow_unrecognized_csi": false, 00:17:10.327 "method": "bdev_nvme_attach_controller", 00:17:10.327 "req_id": 1 00:17:10.327 } 00:17:10.327 Got JSON-RPC error response 00:17:10.327 response: 00:17:10.327 { 00:17:10.327 "code": -5, 00:17:10.327 "message": "Input/output error" 00:17:10.327 } 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.327 15:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.895 nvme0n1 00:17:10.895 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:10.895 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:10.895 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.154 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.154 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.154 15:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.412 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:11.413 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.413 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.413 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.413 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:11.413 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:11.413 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:11.671 nvme0n1 00:17:11.671 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:11.671 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:11.671 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: '' 2s 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: ]] 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTJjMjI2NWY4ZTY5MTYzYTRkOWQxZTcxMzc4YjNhMGG0TqRQ: 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:11.930 15:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: 2s 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: ]] 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTQ1ODE1MDIzZWRiOGJiMTEyZjIzODAyNzYzZTMxM2NkNjQ1OWQ2YWU5ZmRmOTMx4DLhKw==: 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:14.463 15:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.366 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:16.934 nvme0n1 00:17:16.934 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.934 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.934 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.934 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.934 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.934 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:17.502 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:17.760 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:17.760 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:17.760 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.019 15:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.587 request: 00:17:18.587 { 00:17:18.587 "name": "nvme0", 00:17:18.587 "dhchap_key": "key1", 00:17:18.587 "dhchap_ctrlr_key": "key3", 00:17:18.587 "method": "bdev_nvme_set_keys", 00:17:18.587 "req_id": 1 00:17:18.587 } 00:17:18.587 Got JSON-RPC error response 00:17:18.587 response: 00:17:18.587 { 00:17:18.587 "code": -13, 00:17:18.587 "message": "Permission denied" 00:17:18.587 } 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:18.587 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.963 15:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.531 nvme0n1 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
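[editor's note] For readers following this trace, the DH-HMAC-CHAP key-rotation pattern being exercised by target/auth.sh condenses to roughly the sketch below. The rpc.py path, socket, NQNs and key names (key0-key3) are copied from the log above; the keyring registration of those keys happens earlier in the run and is not shown, and it is assumed here that the target app listens on the default RPC socket (as rpc_cmd implies in this suite). This is a reader's sketch of the sequence, not the test script itself.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

    # Target side (default RPC socket): restrict which DH-HMAC-CHAP keys this host may use.
    $rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side (/var/tmp/host.sock): rotate the live controller's keys to match.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

    # A rotation the target does not allow is rejected with JSON-RPC error -13
    # "Permission denied", which is what the NOT cases in this trace assert.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 \
        || echo "rejected as expected"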
00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:20.531 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:21.099 request: 00:17:21.099 { 00:17:21.099 "name": "nvme0", 00:17:21.099 "dhchap_key": "key2", 00:17:21.099 "dhchap_ctrlr_key": "key0", 00:17:21.099 "method": "bdev_nvme_set_keys", 00:17:21.099 "req_id": 1 00:17:21.099 } 00:17:21.099 Got JSON-RPC error response 00:17:21.099 response: 00:17:21.099 { 00:17:21.099 "code": -13, 00:17:21.099 "message": "Permission denied" 00:17:21.099 } 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:21.099 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.358 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:21.358 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:22.293 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:22.293 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:22.293 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1408626 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1408626 ']' 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1408626 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:22.552 
15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1408626 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1408626' 00:17:22.552 killing process with pid 1408626 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1408626 00:17:22.552 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1408626 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.811 rmmod nvme_tcp 00:17:22.811 rmmod nvme_fabrics 00:17:22.811 rmmod nvme_keyring 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1430572 ']' 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1430572 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1430572 ']' 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1430572 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.811 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1430572 00:17:23.070 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.070 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.070 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1430572' 00:17:23.070 killing process with pid 1430572 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1430572 00:17:23.071 15:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1430572 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.071 15:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.0Kv /tmp/spdk.key-sha256.F9M /tmp/spdk.key-sha384.2dv /tmp/spdk.key-sha512.Gaw /tmp/spdk.key-sha512.DaT /tmp/spdk.key-sha384.fOi /tmp/spdk.key-sha256.n0n '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:25.606 00:17:25.606 real 2m32.752s 00:17:25.606 user 5m52.101s 00:17:25.606 sys 0m24.189s 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.606 ************************************ 00:17:25.606 END TEST nvmf_auth_target 00:17:25.606 ************************************ 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.606 ************************************ 00:17:25.606 START TEST nvmf_bdevio_no_huge 00:17:25.606 ************************************ 00:17:25.606 15:10:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:25.606 * Looking for test storage... 
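[editor's note] Before the bdevio_no_huge run starts, the cleanup/nvmftestfini sequence just traced amounts to roughly the following. The PIDs are specific to this run; the single iptables pipeline is an assumption about how the iptr helper composes the iptables-save / grep -v SPDK_NVMF / iptables-restore steps that appear as separate xtrace lines above, and the trace actually removes the generated key files by name rather than with a wildcard.

    kill -0 1408626 && kill 1408626 && wait 1408626      # host-side SPDK app (reactor_1 in this run)
    kill -0 1430572 && kill 1430572 && wait 1430572      # nvmf target app (reactor_0 in this run)

    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # discard the SPDK_NVMF firewall rules
    ip -4 addr flush cvl_0_1                              # release the initiator-side test address
    rm -f /tmp/spdk.key-*                                 # the DH-HMAC-CHAP key files generated for the test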
00:17:25.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.606 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.607 --rc genhtml_branch_coverage=1 00:17:25.607 --rc genhtml_function_coverage=1 00:17:25.607 --rc genhtml_legend=1 00:17:25.607 --rc geninfo_all_blocks=1 00:17:25.607 --rc geninfo_unexecuted_blocks=1 00:17:25.607 00:17:25.607 ' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.607 --rc genhtml_branch_coverage=1 00:17:25.607 --rc genhtml_function_coverage=1 00:17:25.607 --rc genhtml_legend=1 00:17:25.607 --rc geninfo_all_blocks=1 00:17:25.607 --rc geninfo_unexecuted_blocks=1 00:17:25.607 00:17:25.607 ' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.607 --rc genhtml_branch_coverage=1 00:17:25.607 --rc genhtml_function_coverage=1 00:17:25.607 --rc genhtml_legend=1 00:17:25.607 --rc geninfo_all_blocks=1 00:17:25.607 --rc geninfo_unexecuted_blocks=1 00:17:25.607 00:17:25.607 ' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.607 --rc genhtml_branch_coverage=1 00:17:25.607 --rc genhtml_function_coverage=1 00:17:25.607 --rc genhtml_legend=1 00:17:25.607 --rc geninfo_all_blocks=1 00:17:25.607 --rc geninfo_unexecuted_blocks=1 00:17:25.607 00:17:25.607 ' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:25.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.607 15:10:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.178 
15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:32.178 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:32.178 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.178 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:32.179 Found net devices under 0000:af:00.0: cvl_0_0 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:32.179 Found net devices under 0000:af:00.1: cvl_0_1 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.179 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:17:32.179 00:17:32.179 --- 10.0.0.2 ping statistics --- 00:17:32.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.179 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:17:32.179 00:17:32.179 --- 10.0.0.1 ping statistics --- 00:17:32.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.179 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1437424 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1437424 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1437424 ']' 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.179 15:10:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.179 [2024-12-09 15:10:33.206416] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:17:32.179 [2024-12-09 15:10:33.206462] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:32.179 [2024-12-09 15:10:33.287529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.179 [2024-12-09 15:10:33.331900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.179 [2024-12-09 15:10:33.331931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.179 [2024-12-09 15:10:33.331938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.179 [2024-12-09 15:10:33.331943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.179 [2024-12-09 15:10:33.331948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
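Condensed, the network plumbing traced above comes down to the following steps (interface names, addresses, and the port are copied from this log; this is a hand-written sketch assuming root in the default namespace, not the harness's exact code):

    ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic on the test port
    ping -c 1 10.0.0.2                                            # reachability check toward the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back toward the initiator
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

With both pings answering, the target is started inside the namespace; running it with --no-huge and a 1024 MB memory cap (-s 1024) is what gives this test its "no_huge" name.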
00:17:32.179 [2024-12-09 15:10:33.333051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:32.179 [2024-12-09 15:10:33.333158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:32.179 [2024-12-09 15:10:33.333278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.179 [2024-12-09 15:10:33.333279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.438 [2024-12-09 15:10:34.070734] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.438 Malloc0 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.438 [2024-12-09 15:10:34.115017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:32.438 { 00:17:32.438 "params": { 00:17:32.438 "name": "Nvme$subsystem", 00:17:32.438 "trtype": "$TEST_TRANSPORT", 00:17:32.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:32.438 "adrfam": "ipv4", 00:17:32.438 "trsvcid": "$NVMF_PORT", 00:17:32.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:32.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:32.438 "hdgst": ${hdgst:-false}, 00:17:32.438 "ddgst": ${ddgst:-false} 00:17:32.438 }, 00:17:32.438 "method": "bdev_nvme_attach_controller" 00:17:32.438 } 00:17:32.438 EOF 00:17:32.438 )") 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:32.438 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:32.438 "params": { 00:17:32.438 "name": "Nvme1", 00:17:32.438 "trtype": "tcp", 00:17:32.438 "traddr": "10.0.0.2", 00:17:32.438 "adrfam": "ipv4", 00:17:32.438 "trsvcid": "4420", 00:17:32.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.438 "hdgst": false, 00:17:32.438 "ddgst": false 00:17:32.438 }, 00:17:32.438 "method": "bdev_nvme_attach_controller" 00:17:32.438 }' 00:17:32.438 [2024-12-09 15:10:34.180955] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
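The target configuration issued through rpc_cmd above maps to a handful of rpc.py calls; a minimal sketch with the parameters exactly as logged (rpc.py path and the default /var/tmp/spdk.sock socket are assumed):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport with the logged options
    $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow-any-host subsystem
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose the bdev as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then pointed at that listener through the generated JSON printed above (bdev_nvme_attach_controller for Nvme1 at 10.0.0.2:4420), again running with --no-huge -s 1024.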
00:17:32.438 [2024-12-09 15:10:34.181009] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1437475 ] 00:17:32.696 [2024-12-09 15:10:34.261382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:32.696 [2024-12-09 15:10:34.308670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.696 [2024-12-09 15:10:34.308699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.696 [2024-12-09 15:10:34.308699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.954 I/O targets: 00:17:32.954 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:32.954 00:17:32.954 00:17:32.954 CUnit - A unit testing framework for C - Version 2.1-3 00:17:32.954 http://cunit.sourceforge.net/ 00:17:32.954 00:17:32.954 00:17:32.954 Suite: bdevio tests on: Nvme1n1 00:17:32.954 Test: blockdev write read block ...passed 00:17:32.954 Test: blockdev write zeroes read block ...passed 00:17:32.954 Test: blockdev write zeroes read no split ...passed 00:17:32.954 Test: blockdev write zeroes read split ...passed 00:17:32.954 Test: blockdev write zeroes read split partial ...passed 00:17:32.954 Test: blockdev reset ...[2024-12-09 15:10:34.640022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:32.954 [2024-12-09 15:10:34.640086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272ef0 (9): Bad file descriptor 00:17:32.954 [2024-12-09 15:10:34.652967] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:32.954 passed 00:17:32.954 Test: blockdev write read 8 blocks ...passed 00:17:32.954 Test: blockdev write read size > 128k ...passed 00:17:32.954 Test: blockdev write read invalid size ...passed 00:17:33.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:33.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:33.212 Test: blockdev write read max offset ...passed 00:17:33.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:33.212 Test: blockdev writev readv 8 blocks ...passed 00:17:33.212 Test: blockdev writev readv 30 x 1block ...passed 00:17:33.212 Test: blockdev writev readv block ...passed 00:17:33.212 Test: blockdev writev readv size > 128k ...passed 00:17:33.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:33.212 Test: blockdev comparev and writev ...[2024-12-09 15:10:34.947898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.947926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.947940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.947948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.948195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.948206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.948221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.948229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.948462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.948473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.948486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.948493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.948729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.948740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:33.212 [2024-12-09 15:10:34.948751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:33.212 [2024-12-09 15:10:34.948759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:33.212 passed 00:17:33.469 Test: blockdev nvme passthru rw ...passed 00:17:33.469 Test: blockdev nvme passthru vendor specific ...[2024-12-09 15:10:35.030554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.469 [2024-12-09 15:10:35.030571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:33.469 [2024-12-09 15:10:35.030678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.469 [2024-12-09 15:10:35.030688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:33.469 [2024-12-09 15:10:35.030789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.469 [2024-12-09 15:10:35.030802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:33.469 [2024-12-09 15:10:35.030902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.469 [2024-12-09 15:10:35.030912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:33.469 passed 00:17:33.469 Test: blockdev nvme admin passthru ...passed 00:17:33.469 Test: blockdev copy ...passed 00:17:33.469 00:17:33.469 Run Summary: Type Total Ran Passed Failed Inactive 00:17:33.469 suites 1 1 n/a 0 0 00:17:33.469 tests 23 23 23 0 0 00:17:33.469 asserts 152 152 152 0 n/a 00:17:33.469 00:17:33.469 Elapsed time = 1.148 seconds 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.727 rmmod nvme_tcp 00:17:33.727 rmmod nvme_fabrics 00:17:33.727 rmmod nvme_keyring 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1437424 ']' 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1437424 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1437424 ']' 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1437424 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1437424 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1437424' 00:17:33.727 killing process with pid 1437424 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1437424 00:17:33.727 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1437424 00:17:33.986 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.986 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.986 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.986 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.245 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.154 00:17:36.154 real 0m10.916s 00:17:36.154 user 0m13.465s 00:17:36.154 sys 0m5.355s 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:36.154 ************************************ 00:17:36.154 END TEST nvmf_bdevio_no_huge 00:17:36.154 ************************************ 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.154 ************************************ 00:17:36.154 START TEST nvmf_tls 00:17:36.154 ************************************ 00:17:36.154 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:36.414 * Looking for test storage... 00:17:36.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:36.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.414 --rc genhtml_branch_coverage=1 00:17:36.414 --rc genhtml_function_coverage=1 00:17:36.414 --rc genhtml_legend=1 00:17:36.414 --rc geninfo_all_blocks=1 00:17:36.414 --rc geninfo_unexecuted_blocks=1 00:17:36.414 00:17:36.414 ' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:36.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.414 --rc genhtml_branch_coverage=1 00:17:36.414 --rc genhtml_function_coverage=1 00:17:36.414 --rc genhtml_legend=1 00:17:36.414 --rc geninfo_all_blocks=1 00:17:36.414 --rc geninfo_unexecuted_blocks=1 00:17:36.414 00:17:36.414 ' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:36.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.414 --rc genhtml_branch_coverage=1 00:17:36.414 --rc genhtml_function_coverage=1 00:17:36.414 --rc genhtml_legend=1 00:17:36.414 --rc geninfo_all_blocks=1 00:17:36.414 --rc geninfo_unexecuted_blocks=1 00:17:36.414 00:17:36.414 ' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:36.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.414 --rc genhtml_branch_coverage=1 00:17:36.414 --rc genhtml_function_coverage=1 00:17:36.414 --rc genhtml_legend=1 00:17:36.414 --rc geninfo_all_blocks=1 00:17:36.414 --rc geninfo_unexecuted_blocks=1 00:17:36.414 00:17:36.414 ' 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
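The scripts/common.sh trace above is a dot-separated version comparison deciding whether the installed lcov predates 2.x; the same idea as a standalone helper (illustrative only, not the SPDK function itself):

    lt() {                                   # succeed when $1 sorts before $2, comparing numeric fields
      local IFS=.- a b i
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                               # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"   # mirrors the 'lt 1.15 2' call in the trace

Because 1 is less than 2 in the first field, the comparison succeeds and the branch that exports the --rc lcov_* coverage options is taken, which is exactly what the LCOV_OPTS/LCOV exports above show.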
00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.414 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.415 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
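The e810/x722/mlx arrays being filled above are buckets of PCI vendor:device IDs used to pick the test NICs; roughly the same classification can be done by hand (IDs copied from the trace; the lspci usage is illustrative and not SPDK's implementation):

    declare -A nic_class=(
      [8086:1592]=e810 [8086:159b]=e810   # Intel E810 family (0x159b is what this rig reports)
      [8086:37d2]=x722                    # Intel X722
      [15b3:1017]=mlx [15b3:101d]=mlx     # two of the Mellanox IDs listed in the trace
    )
    while read -r addr id; do
      echo "$addr -> ${nic_class[$id]:-unknown}"
    done < <(lspci -Dn -d ::0200 | awk '{print $1, $3}')   # Ethernet-class devices, numeric IDs

On this machine that yields the two 0000:af:00.x ports reported as 0x8086 - 0x159b, which is why the run proceeds down the e810 path.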
00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:42.990 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:42.990 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:42.991 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:42.991 Found net devices under 0000:af:00.0: cvl_0_0 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:42.991 Found net devices under 0000:af:00.1: cvl_0_1 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:42.991 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:42.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:17:42.991 00:17:42.991 --- 10.0.0.2 ping statistics --- 00:17:42.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.991 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:17:42.991 00:17:42.991 --- 10.0.0.1 ping statistics --- 00:17:42.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.991 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1441251 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1441251 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1441251 ']' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.991 [2024-12-09 15:10:44.222238] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:17:42.991 [2024-12-09 15:10:44.222287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.991 [2024-12-09 15:10:44.305073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.991 [2024-12-09 15:10:44.344373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.991 [2024-12-09 15:10:44.344409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.991 [2024-12-09 15:10:44.344416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.991 [2024-12-09 15:10:44.344422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.991 [2024-12-09 15:10:44.344427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.991 [2024-12-09 15:10:44.344962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:42.991 true 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:42.991 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:43.250 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.250 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:43.519 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:43.519 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:43.519 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:43.782 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # jq -r .tls_version 00:17:43.782 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.782 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:43.782 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:43.782 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:43.782 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:44.040 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:44.040 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:44.040 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:44.298 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.298 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:44.298 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:44.298 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:44.298 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:44.600 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.600 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.h6MLBUCuZx 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.qbbHR4JqCQ 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.h6MLBUCuZx 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.qbbHR4JqCQ 00:17:44.914 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:45.196 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:45.464 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.h6MLBUCuZx 00:17:45.465 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.h6MLBUCuZx 00:17:45.465 15:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.465 [2024-12-09 15:10:47.171740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.465 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.724 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.983 [2024-12-09 15:10:47.548689] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.983 [2024-12-09 15:10:47.548898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.983 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:45.983 malloc0 00:17:45.983 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.243 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.h6MLBUCuZx 00:17:46.505 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:46.765 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.h6MLBUCuZx 00:17:56.740 Initializing NVMe Controllers 00:17:56.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.740 Initialization complete. Launching workers. 00:17:56.740 ======================================================== 00:17:56.740 Latency(us) 00:17:56.740 Device Information : IOPS MiB/s Average min max 00:17:56.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16911.13 66.06 3784.58 814.77 4601.02 00:17:56.740 ======================================================== 00:17:56.740 Total : 16911.13 66.06 3784.58 814.77 4601.02 00:17:56.740 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h6MLBUCuZx 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h6MLBUCuZx 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1443731 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1443731 /var/tmp/bdevperf.sock 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1443731 ']' 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:56.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.740 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.740 [2024-12-09 15:10:58.494158] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:17:56.740 [2024-12-09 15:10:58.494205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443731 ] 00:17:56.998 [2024-12-09 15:10:58.568148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.998 [2024-12-09 15:10:58.606861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.998 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.998 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.998 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h6MLBUCuZx 00:17:57.255 15:10:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.514 [2024-12-09 15:10:59.066612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.514 TLSTESTn1 00:17:57.514 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:57.514 Running I/O for 10 seconds... 
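Condensed from the setup_nvmf_tgt and run_bdevperf steps logged above, the target/initiator RPC sequence for the TLS-enabled listener boils down to the following sketch (rpc.py path shortened from the full /var/jenkins/workspace/... path; every flag and NQN is the one shown in the log):

# target side: select the ssl sock impl, start the framework, expose a malloc namespace over a TLS (-k) listener
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.h6MLBUCuZx
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side (bdevperf RPC socket): register the same PSK file and attach with it
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h6MLBUCuZx
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0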
00:17:59.823 5375.00 IOPS, 21.00 MiB/s [2024-12-09T14:11:02.554Z] 5443.50 IOPS, 21.26 MiB/s [2024-12-09T14:11:03.490Z] 5509.00 IOPS, 21.52 MiB/s [2024-12-09T14:11:04.426Z] 5559.25 IOPS, 21.72 MiB/s [2024-12-09T14:11:05.360Z] 5551.20 IOPS, 21.68 MiB/s [2024-12-09T14:11:06.294Z] 5568.17 IOPS, 21.75 MiB/s [2024-12-09T14:11:07.675Z] 5413.43 IOPS, 21.15 MiB/s [2024-12-09T14:11:08.611Z] 5302.88 IOPS, 20.71 MiB/s [2024-12-09T14:11:09.547Z] 5221.44 IOPS, 20.40 MiB/s [2024-12-09T14:11:09.547Z] 5147.60 IOPS, 20.11 MiB/s 00:18:07.752 Latency(us) 00:18:07.752 [2024-12-09T14:11:09.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.752 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:07.752 Verification LBA range: start 0x0 length 0x2000 00:18:07.752 TLSTESTn1 : 10.02 5148.25 20.11 0.00 0.00 24818.55 7552.24 25590.25 00:18:07.752 [2024-12-09T14:11:09.547Z] =================================================================================================================== 00:18:07.752 [2024-12-09T14:11:09.547Z] Total : 5148.25 20.11 0.00 0.00 24818.55 7552.24 25590.25 00:18:07.752 { 00:18:07.752 "results": [ 00:18:07.752 { 00:18:07.752 "job": "TLSTESTn1", 00:18:07.752 "core_mask": "0x4", 00:18:07.752 "workload": "verify", 00:18:07.752 "status": "finished", 00:18:07.752 "verify_range": { 00:18:07.752 "start": 0, 00:18:07.752 "length": 8192 00:18:07.752 }, 00:18:07.752 "queue_depth": 128, 00:18:07.752 "io_size": 4096, 00:18:07.752 "runtime": 10.023599, 00:18:07.752 "iops": 5148.25064330686, 00:18:07.752 "mibps": 20.110354075417423, 00:18:07.752 "io_failed": 0, 00:18:07.752 "io_timeout": 0, 00:18:07.752 "avg_latency_us": 24818.551488588924, 00:18:07.752 "min_latency_us": 7552.243809523809, 00:18:07.752 "max_latency_us": 25590.24761904762 00:18:07.752 } 00:18:07.752 ], 00:18:07.752 "core_count": 1 00:18:07.752 } 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1443731 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1443731 ']' 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1443731 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1443731 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1443731' 00:18:07.752 killing process with pid 1443731 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1443731 00:18:07.752 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.752 00:18:07.752 Latency(us) 00:18:07.752 [2024-12-09T14:11:09.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.752 [2024-12-09T14:11:09.547Z] 
=================================================================================================================== 00:18:07.752 [2024-12-09T14:11:09.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1443731 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qbbHR4JqCQ 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qbbHR4JqCQ 00:18:07.752 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qbbHR4JqCQ 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qbbHR4JqCQ 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1445373 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1445373 /var/tmp/bdevperf.sock 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1445373 ']' 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.753 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.011 [2024-12-09 15:11:09.575875] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:08.011 [2024-12-09 15:11:09.575925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445373 ] 00:18:08.011 [2024-12-09 15:11:09.650706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.011 [2024-12-09 15:11:09.688748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.011 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.011 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.011 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qbbHR4JqCQ 00:18:08.270 15:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.529 [2024-12-09 15:11:10.168045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.529 [2024-12-09 15:11:10.174210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:08.529 [2024-12-09 15:11:10.174321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246c700 (107): Transport endpoint is not connected 00:18:08.529 [2024-12-09 15:11:10.175315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246c700 (9): Bad file descriptor 00:18:08.529 [2024-12-09 15:11:10.176316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:08.529 [2024-12-09 15:11:10.176328] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:08.529 [2024-12-09 15:11:10.176335] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:08.529 [2024-12-09 15:11:10.176346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:08.529 request: 00:18:08.529 { 00:18:08.529 "name": "TLSTEST", 00:18:08.529 "trtype": "tcp", 00:18:08.529 "traddr": "10.0.0.2", 00:18:08.529 "adrfam": "ipv4", 00:18:08.529 "trsvcid": "4420", 00:18:08.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.529 "prchk_reftag": false, 00:18:08.529 "prchk_guard": false, 00:18:08.529 "hdgst": false, 00:18:08.529 "ddgst": false, 00:18:08.529 "psk": "key0", 00:18:08.529 "allow_unrecognized_csi": false, 00:18:08.529 "method": "bdev_nvme_attach_controller", 00:18:08.529 "req_id": 1 00:18:08.529 } 00:18:08.529 Got JSON-RPC error response 00:18:08.529 response: 00:18:08.529 { 00:18:08.529 "code": -5, 00:18:08.529 "message": "Input/output error" 00:18:08.529 } 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1445373 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1445373 ']' 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1445373 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445373 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445373' 00:18:08.529 killing process with pid 1445373 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1445373 00:18:08.529 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.529 00:18:08.529 Latency(us) 00:18:08.529 [2024-12-09T14:11:10.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.529 [2024-12-09T14:11:10.324Z] =================================================================================================================== 00:18:08.529 [2024-12-09T14:11:10.324Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.529 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1445373 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h6MLBUCuZx 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.h6MLBUCuZx 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h6MLBUCuZx 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h6MLBUCuZx 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1445562 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1445562 /var/tmp/bdevperf.sock 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1445562 ']' 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.788 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.788 [2024-12-09 15:11:10.460286] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:18:08.788 [2024-12-09 15:11:10.460334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445562 ] 00:18:08.788 [2024-12-09 15:11:10.532358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.789 [2024-12-09 15:11:10.573075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.047 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.047 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.047 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h6MLBUCuZx 00:18:09.306 15:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:09.306 [2024-12-09 15:11:11.041838] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.306 [2024-12-09 15:11:11.046354] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.306 [2024-12-09 15:11:11.046376] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.306 [2024-12-09 15:11:11.046399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.306 [2024-12-09 15:11:11.046964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90f700 (107): Transport endpoint is not connected 00:18:09.306 [2024-12-09 15:11:11.047956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90f700 (9): Bad file descriptor 00:18:09.306 [2024-12-09 15:11:11.048958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:09.306 [2024-12-09 15:11:11.048970] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.306 [2024-12-09 15:11:11.048977] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:09.306 [2024-12-09 15:11:11.048988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:09.306 request: 00:18:09.306 { 00:18:09.306 "name": "TLSTEST", 00:18:09.306 "trtype": "tcp", 00:18:09.306 "traddr": "10.0.0.2", 00:18:09.306 "adrfam": "ipv4", 00:18:09.306 "trsvcid": "4420", 00:18:09.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.306 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:09.306 "prchk_reftag": false, 00:18:09.306 "prchk_guard": false, 00:18:09.306 "hdgst": false, 00:18:09.306 "ddgst": false, 00:18:09.306 "psk": "key0", 00:18:09.306 "allow_unrecognized_csi": false, 00:18:09.306 "method": "bdev_nvme_attach_controller", 00:18:09.306 "req_id": 1 00:18:09.306 } 00:18:09.306 Got JSON-RPC error response 00:18:09.306 response: 00:18:09.306 { 00:18:09.306 "code": -5, 00:18:09.306 "message": "Input/output error" 00:18:09.306 } 00:18:09.306 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1445562 00:18:09.306 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1445562 ']' 00:18:09.306 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1445562 00:18:09.306 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:09.306 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.306 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445562 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445562' 00:18:09.565 killing process with pid 1445562 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1445562 00:18:09.565 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.565 00:18:09.565 Latency(us) 00:18:09.565 [2024-12-09T14:11:11.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.565 [2024-12-09T14:11:11.360Z] =================================================================================================================== 00:18:09.565 [2024-12-09T14:11:11.360Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1445562 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h6MLBUCuZx 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.h6MLBUCuZx 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h6MLBUCuZx 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h6MLBUCuZx 00:18:09.565 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1445795 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1445795 /var/tmp/bdevperf.sock 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1445795 ']' 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.566 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 [2024-12-09 15:11:11.327898] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:18:09.566 [2024-12-09 15:11:11.327947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445795 ] 00:18:09.824 [2024-12-09 15:11:11.400609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.824 [2024-12-09 15:11:11.437401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.824 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.824 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.824 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h6MLBUCuZx 00:18:10.082 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.341 [2024-12-09 15:11:11.892472] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.341 [2024-12-09 15:11:11.898512] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.341 [2024-12-09 15:11:11.898532] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.341 [2024-12-09 15:11:11.898556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.341 [2024-12-09 15:11:11.898836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb1700 (107): Transport endpoint is not connected 00:18:10.341 [2024-12-09 15:11:11.899829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb1700 (9): Bad file descriptor 00:18:10.341 [2024-12-09 15:11:11.900830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:10.341 [2024-12-09 15:11:11.900842] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.341 [2024-12-09 15:11:11.900851] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:10.341 [2024-12-09 15:11:11.900862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:10.341 request: 00:18:10.341 { 00:18:10.341 "name": "TLSTEST", 00:18:10.341 "trtype": "tcp", 00:18:10.341 "traddr": "10.0.0.2", 00:18:10.341 "adrfam": "ipv4", 00:18:10.341 "trsvcid": "4420", 00:18:10.341 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.341 "prchk_reftag": false, 00:18:10.341 "prchk_guard": false, 00:18:10.341 "hdgst": false, 00:18:10.341 "ddgst": false, 00:18:10.341 "psk": "key0", 00:18:10.341 "allow_unrecognized_csi": false, 00:18:10.341 "method": "bdev_nvme_attach_controller", 00:18:10.341 "req_id": 1 00:18:10.341 } 00:18:10.341 Got JSON-RPC error response 00:18:10.341 response: 00:18:10.341 { 00:18:10.341 "code": -5, 00:18:10.341 "message": "Input/output error" 00:18:10.341 } 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1445795 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1445795 ']' 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1445795 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445795 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445795' 00:18:10.341 killing process with pid 1445795 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1445795 00:18:10.341 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.341 00:18:10.341 Latency(us) 00:18:10.341 [2024-12-09T14:11:12.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.341 [2024-12-09T14:11:12.136Z] =================================================================================================================== 00:18:10.341 [2024-12-09T14:11:12.136Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.341 15:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1445795 00:18:10.341 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:10.341 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:10.341 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.341 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.341 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.342 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.342 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.342 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.342 
15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1445815 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1445815 /var/tmp/bdevperf.sock 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1445815 ']' 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.601 [2024-12-09 15:11:12.187882] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:18:10.601 [2024-12-09 15:11:12.187928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445815 ] 00:18:10.601 [2024-12-09 15:11:12.260136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.601 [2024-12-09 15:11:12.296743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.601 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.860 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.860 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:10.860 [2024-12-09 15:11:12.579897] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:10.860 [2024-12-09 15:11:12.579930] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:10.860 request: 00:18:10.860 { 00:18:10.860 "name": "key0", 00:18:10.860 "path": "", 00:18:10.860 "method": "keyring_file_add_key", 00:18:10.860 "req_id": 1 00:18:10.860 } 00:18:10.860 Got JSON-RPC error response 00:18:10.860 response: 00:18:10.860 { 00:18:10.860 "code": -1, 00:18:10.860 "message": "Operation not permitted" 00:18:10.860 } 00:18:10.860 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.119 [2024-12-09 15:11:12.776489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.119 [2024-12-09 15:11:12.776517] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:11.119 request: 00:18:11.119 { 00:18:11.119 "name": "TLSTEST", 00:18:11.119 "trtype": "tcp", 00:18:11.119 "traddr": "10.0.0.2", 00:18:11.119 "adrfam": "ipv4", 00:18:11.119 "trsvcid": "4420", 00:18:11.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.119 "prchk_reftag": false, 00:18:11.119 "prchk_guard": false, 00:18:11.119 "hdgst": false, 00:18:11.119 "ddgst": false, 00:18:11.119 "psk": "key0", 00:18:11.119 "allow_unrecognized_csi": false, 00:18:11.119 "method": "bdev_nvme_attach_controller", 00:18:11.119 "req_id": 1 00:18:11.119 } 00:18:11.119 Got JSON-RPC error response 00:18:11.119 response: 00:18:11.119 { 00:18:11.119 "code": -126, 00:18:11.119 "message": "Required key not available" 00:18:11.119 } 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1445815 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1445815 ']' 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1445815 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1445815 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445815' 00:18:11.119 killing process with pid 1445815 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1445815 00:18:11.119 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.119 00:18:11.119 Latency(us) 00:18:11.119 [2024-12-09T14:11:12.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.119 [2024-12-09T14:11:12.914Z] =================================================================================================================== 00:18:11.119 [2024-12-09T14:11:12.914Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.119 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1445815 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1441251 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1441251 ']' 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1441251 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1441251 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1441251' 00:18:11.378 killing process with pid 1441251 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1441251 00:18:11.378 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1441251 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:11.637 15:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.TbGNrubY87 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.TbGNrubY87 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1446056 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1446056 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1446056 ']' 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.637 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.637 [2024-12-09 15:11:13.322150] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:11.638 [2024-12-09 15:11:13.322196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.638 [2024-12-09 15:11:13.400747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.896 [2024-12-09 15:11:13.440157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.896 [2024-12-09 15:11:13.440190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:11.896 [2024-12-09 15:11:13.440197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.896 [2024-12-09 15:11:13.440203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.896 [2024-12-09 15:11:13.440208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.896 [2024-12-09 15:11:13.440747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.TbGNrubY87 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TbGNrubY87 00:18:11.896 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.155 [2024-12-09 15:11:13.751349] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.155 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.413 15:11:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.414 [2024-12-09 15:11:14.164399] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.414 [2024-12-09 15:11:14.164593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.414 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.672 malloc0 00:18:12.672 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.930 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TbGNrubY87 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TbGNrubY87 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1446313 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1446313 /var/tmp/bdevperf.sock 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1446313 ']' 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.189 15:11:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.453 [2024-12-09 15:11:15.008635] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
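Editor's note: before this attach attempt, the target side was assembled from the NVMeTLSkey-1:02:... interchange key generated above (format_interchange_psk with digest argument 2, the SHA-384 flavour) and written to a mode-0600 temp file. Condensed from the RPC calls traced in the preceding lines, the TLS-enabled target setup amounts to the following sketch (rpc.py abbreviates the full scripts/rpc.py path used in the trace; key path and address are the ones shown there):

  KEY=/tmp/tmp.TbGNrubY87   # 0600 file holding the NVMeTLSkey-1:02:... string

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 "$KEY"
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

From this point on the PSK is referenced only by its keyring name (key0); the initiator side below registers the same file under the same name on the bdevperf RPC socket before bdev_nvme_attach_controller.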
00:18:13.453 [2024-12-09 15:11:15.008682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446313 ] 00:18:13.453 [2024-12-09 15:11:15.083654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.453 [2024-12-09 15:11:15.122970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.453 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.453 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.453 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:13.715 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.973 [2024-12-09 15:11:15.598404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.973 TLSTESTn1 00:18:13.973 15:11:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.232 Running I/O for 10 seconds... 00:18:16.104 5343.00 IOPS, 20.87 MiB/s [2024-12-09T14:11:18.835Z] 5388.00 IOPS, 21.05 MiB/s [2024-12-09T14:11:20.211Z] 5176.33 IOPS, 20.22 MiB/s [2024-12-09T14:11:21.147Z] 5097.75 IOPS, 19.91 MiB/s [2024-12-09T14:11:22.084Z] 5063.40 IOPS, 19.78 MiB/s [2024-12-09T14:11:23.019Z] 5032.33 IOPS, 19.66 MiB/s [2024-12-09T14:11:23.955Z] 5024.29 IOPS, 19.63 MiB/s [2024-12-09T14:11:24.890Z] 5013.75 IOPS, 19.58 MiB/s [2024-12-09T14:11:25.826Z] 5024.33 IOPS, 19.63 MiB/s [2024-12-09T14:11:25.826Z] 5014.10 IOPS, 19.59 MiB/s 00:18:24.031 Latency(us) 00:18:24.031 [2024-12-09T14:11:25.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.031 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.031 Verification LBA range: start 0x0 length 0x2000 00:18:24.031 TLSTESTn1 : 10.02 5018.52 19.60 0.00 0.00 25469.47 6023.07 30583.47 00:18:24.031 [2024-12-09T14:11:25.826Z] =================================================================================================================== 00:18:24.031 [2024-12-09T14:11:25.826Z] Total : 5018.52 19.60 0.00 0.00 25469.47 6023.07 30583.47 00:18:24.031 { 00:18:24.031 "results": [ 00:18:24.031 { 00:18:24.031 "job": "TLSTESTn1", 00:18:24.031 "core_mask": "0x4", 00:18:24.031 "workload": "verify", 00:18:24.031 "status": "finished", 00:18:24.031 "verify_range": { 00:18:24.031 "start": 0, 00:18:24.031 "length": 8192 00:18:24.031 }, 00:18:24.031 "queue_depth": 128, 00:18:24.031 "io_size": 4096, 00:18:24.031 "runtime": 10.016498, 00:18:24.031 "iops": 5018.520444969889, 00:18:24.031 "mibps": 19.603595488163627, 00:18:24.031 "io_failed": 0, 00:18:24.031 "io_timeout": 0, 00:18:24.031 "avg_latency_us": 25469.474853490057, 00:18:24.031 "min_latency_us": 6023.070476190476, 00:18:24.031 "max_latency_us": 30583.466666666667 00:18:24.031 } 00:18:24.031 ], 00:18:24.031 
"core_count": 1 00:18:24.031 } 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1446313 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1446313 ']' 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1446313 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446313 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446313' 00:18:24.290 killing process with pid 1446313 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1446313 00:18:24.290 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.290 00:18:24.290 Latency(us) 00:18:24.290 [2024-12-09T14:11:26.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.290 [2024-12-09T14:11:26.085Z] =================================================================================================================== 00:18:24.290 [2024-12-09T14:11:26.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.290 15:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1446313 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.TbGNrubY87 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TbGNrubY87 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TbGNrubY87 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TbGNrubY87 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TbGNrubY87 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1448124 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1448124 /var/tmp/bdevperf.sock 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1448124 ']' 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.290 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.549 [2024-12-09 15:11:26.105259] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
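Editor's note: having shown the happy path, tls.sh@171 loosens the key file to mode 0666 and re-runs the attach under NOT, so failure is now the expected outcome: the keyring refuses world-accessible key files, key0 is never registered, and the controller attach cannot resolve it (both error responses appear just below). The permission contract in isolation, using the same key path as the trace, is essentially:

  chmod 0600 /tmp/tmp.TbGNrubY87
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87
  # accepted, as in the earlier run

  chmod 0666 /tmp/tmp.TbGNrubY87
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87
  # rejected: "Invalid permissions for key file '/tmp/tmp.TbGNrubY87': 0100666"
  # (JSON-RPC error -1, "Operation not permitted")

(rpc.py again stands for the full scripts/rpc.py path, and each attempt assumes a fresh bdevperf instance, matching how the test tears bdevperf down between cases.)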
00:18:24.549 [2024-12-09 15:11:26.105310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448124 ] 00:18:24.549 [2024-12-09 15:11:26.178390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.549 [2024-12-09 15:11:26.214143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.549 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.549 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.549 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:24.808 [2024-12-09 15:11:26.496558] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TbGNrubY87': 0100666 00:18:24.808 [2024-12-09 15:11:26.496590] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:24.808 request: 00:18:24.808 { 00:18:24.808 "name": "key0", 00:18:24.808 "path": "/tmp/tmp.TbGNrubY87", 00:18:24.808 "method": "keyring_file_add_key", 00:18:24.808 "req_id": 1 00:18:24.808 } 00:18:24.808 Got JSON-RPC error response 00:18:24.808 response: 00:18:24.808 { 00:18:24.808 "code": -1, 00:18:24.808 "message": "Operation not permitted" 00:18:24.808 } 00:18:24.808 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.067 [2024-12-09 15:11:26.713194] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.067 [2024-12-09 15:11:26.713230] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:25.067 request: 00:18:25.067 { 00:18:25.067 "name": "TLSTEST", 00:18:25.067 "trtype": "tcp", 00:18:25.067 "traddr": "10.0.0.2", 00:18:25.067 "adrfam": "ipv4", 00:18:25.067 "trsvcid": "4420", 00:18:25.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.067 "prchk_reftag": false, 00:18:25.067 "prchk_guard": false, 00:18:25.067 "hdgst": false, 00:18:25.067 "ddgst": false, 00:18:25.067 "psk": "key0", 00:18:25.067 "allow_unrecognized_csi": false, 00:18:25.067 "method": "bdev_nvme_attach_controller", 00:18:25.067 "req_id": 1 00:18:25.067 } 00:18:25.067 Got JSON-RPC error response 00:18:25.067 response: 00:18:25.067 { 00:18:25.067 "code": -126, 00:18:25.067 "message": "Required key not available" 00:18:25.067 } 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1448124 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1448124 ']' 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1448124 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448124 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448124' 00:18:25.067 killing process with pid 1448124 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1448124 00:18:25.067 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.067 00:18:25.067 Latency(us) 00:18:25.067 [2024-12-09T14:11:26.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.067 [2024-12-09T14:11:26.862Z] =================================================================================================================== 00:18:25.067 [2024-12-09T14:11:26.862Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.067 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1448124 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1446056 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1446056 ']' 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1446056 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446056 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446056' 00:18:25.326 killing process with pid 1446056 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1446056 00:18:25.326 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1446056 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1448360 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1448360 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1448360 ']' 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.585 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.585 [2024-12-09 15:11:27.215617] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:25.585 [2024-12-09 15:11:27.215663] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.585 [2024-12-09 15:11:27.290357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.585 [2024-12-09 15:11:27.324119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.585 [2024-12-09 15:11:27.324155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.585 [2024-12-09 15:11:27.324162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.585 [2024-12-09 15:11:27.324168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.585 [2024-12-09 15:11:27.324174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
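Editor's note: the same negative check is then repeated on the target side. A fresh nvmf_tgt (pid 1448360) has been started inside the cvl_0_0_ns_spdk namespace, and setup_nvmf_tgt is run under NOT while the key file is still mode 0666. Transport, subsystem, listener and namespace creation go through as before, but the sequence is expected to break at the keyring step, and that first failure cascades into the host registration, as the trace below reports. Reduced to the two failing calls (error strings copied from that trace, rpc.py abbreviating the full path):

  # key file is still world-accessible (0666) at this point
  rpc.py keyring_file_add_key key0 /tmp/tmp.TbGNrubY87
  # -> "Invalid permissions for key file": JSON-RPC error -1, "Operation not permitted"

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # -> "Key 'key0' does not exist": JSON-RPC error -32603, "Internal error",
  #    because the previous step never put key0 into the keyring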
00:18:25.585 [2024-12-09 15:11:27.324738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.TbGNrubY87 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.TbGNrubY87 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.TbGNrubY87 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TbGNrubY87 00:18:25.845 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.845 [2024-12-09 15:11:27.636512] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.104 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.104 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:26.363 [2024-12-09 15:11:28.033529] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.363 [2024-12-09 15:11:28.033739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.363 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:26.622 malloc0 00:18:26.622 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.881 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:26.881 [2024-12-09 
15:11:28.602881] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TbGNrubY87': 0100666 00:18:26.881 [2024-12-09 15:11:28.602906] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:26.881 request: 00:18:26.881 { 00:18:26.881 "name": "key0", 00:18:26.881 "path": "/tmp/tmp.TbGNrubY87", 00:18:26.881 "method": "keyring_file_add_key", 00:18:26.881 "req_id": 1 00:18:26.881 } 00:18:26.881 Got JSON-RPC error response 00:18:26.881 response: 00:18:26.881 { 00:18:26.881 "code": -1, 00:18:26.881 "message": "Operation not permitted" 00:18:26.881 } 00:18:26.881 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.140 [2024-12-09 15:11:28.783368] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:27.140 [2024-12-09 15:11:28.783399] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:27.140 request: 00:18:27.140 { 00:18:27.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.140 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.140 "psk": "key0", 00:18:27.140 "method": "nvmf_subsystem_add_host", 00:18:27.140 "req_id": 1 00:18:27.140 } 00:18:27.140 Got JSON-RPC error response 00:18:27.140 response: 00:18:27.140 { 00:18:27.140 "code": -32603, 00:18:27.140 "message": "Internal error" 00:18:27.140 } 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1448360 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1448360 ']' 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1448360 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448360 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448360' 00:18:27.140 killing process with pid 1448360 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1448360 00:18:27.140 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1448360 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.TbGNrubY87 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:27.400 15:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1448623 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1448623 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1448623 ']' 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.400 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.400 [2024-12-09 15:11:29.069867] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:27.400 [2024-12-09 15:11:29.069911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.400 [2024-12-09 15:11:29.149490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.400 [2024-12-09 15:11:29.185336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.400 [2024-12-09 15:11:29.185374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.400 [2024-12-09 15:11:29.185381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.400 [2024-12-09 15:11:29.185387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.400 [2024-12-09 15:11:29.185392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
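Editor's note: with the key file back at mode 0600 (tls.sh@182), the target is brought up once more and the whole path is expected to succeed: setup_nvmf_tgt completes, a new bdevperf attaches over TLS and runs TLSTESTn1, and tls.sh@198-199 then snapshots both applications as JSON via save_config, producing the tgtconf and bdevperfconf dumps that follow. Capturing those two snapshots is simply (file names here are illustrative; the script keeps the output in shell variables):

  # target-side configuration: keyring entry, TLS listener, subsystem with --psk host
  rpc.py save_config > tgt_config.json

  # initiator-side configuration from the bdevperf instance
  rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json

Note that both dumps reference the PSK only by keyring name and file path (key0, /tmp/tmp.TbGNrubY87); the key material itself never appears in the saved configuration.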
00:18:27.400 [2024-12-09 15:11:29.185920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.TbGNrubY87 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TbGNrubY87 00:18:27.660 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:27.919 [2024-12-09 15:11:29.495000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.919 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:27.919 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.178 [2024-12-09 15:11:29.863926] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.178 [2024-12-09 15:11:29.864114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.178 15:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:28.436 malloc0 00:18:28.437 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.695 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:28.695 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1448954 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1448954 /var/tmp/bdevperf.sock 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1448954 ']' 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.954 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.954 [2024-12-09 15:11:30.699653] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:28.954 [2024-12-09 15:11:30.699707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448954 ] 00:18:29.213 [2024-12-09 15:11:30.775907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.213 [2024-12-09 15:11:30.815079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.213 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.213 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.213 15:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:29.472 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.730 [2024-12-09 15:11:31.266806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.730 TLSTESTn1 00:18:29.730 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:29.990 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:29.990 "subsystems": [ 00:18:29.990 { 00:18:29.990 "subsystem": "keyring", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "keyring_file_add_key", 00:18:29.990 "params": { 00:18:29.990 "name": "key0", 00:18:29.990 "path": "/tmp/tmp.TbGNrubY87" 00:18:29.990 } 00:18:29.990 } 00:18:29.990 ] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "iobuf", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "iobuf_set_options", 00:18:29.990 "params": { 00:18:29.990 "small_pool_count": 8192, 00:18:29.990 "large_pool_count": 1024, 00:18:29.990 "small_bufsize": 8192, 00:18:29.990 "large_bufsize": 135168, 00:18:29.990 "enable_numa": false 00:18:29.990 } 00:18:29.990 } 00:18:29.990 ] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "sock", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "sock_set_default_impl", 00:18:29.990 "params": { 00:18:29.990 "impl_name": "posix" 
00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "sock_impl_set_options", 00:18:29.990 "params": { 00:18:29.990 "impl_name": "ssl", 00:18:29.990 "recv_buf_size": 4096, 00:18:29.990 "send_buf_size": 4096, 00:18:29.990 "enable_recv_pipe": true, 00:18:29.990 "enable_quickack": false, 00:18:29.990 "enable_placement_id": 0, 00:18:29.990 "enable_zerocopy_send_server": true, 00:18:29.990 "enable_zerocopy_send_client": false, 00:18:29.990 "zerocopy_threshold": 0, 00:18:29.990 "tls_version": 0, 00:18:29.990 "enable_ktls": false 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "sock_impl_set_options", 00:18:29.990 "params": { 00:18:29.990 "impl_name": "posix", 00:18:29.990 "recv_buf_size": 2097152, 00:18:29.990 "send_buf_size": 2097152, 00:18:29.990 "enable_recv_pipe": true, 00:18:29.990 "enable_quickack": false, 00:18:29.990 "enable_placement_id": 0, 00:18:29.990 "enable_zerocopy_send_server": true, 00:18:29.990 "enable_zerocopy_send_client": false, 00:18:29.990 "zerocopy_threshold": 0, 00:18:29.990 "tls_version": 0, 00:18:29.990 "enable_ktls": false 00:18:29.990 } 00:18:29.990 } 00:18:29.990 ] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "vmd", 00:18:29.990 "config": [] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "accel", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "accel_set_options", 00:18:29.990 "params": { 00:18:29.990 "small_cache_size": 128, 00:18:29.990 "large_cache_size": 16, 00:18:29.990 "task_count": 2048, 00:18:29.990 "sequence_count": 2048, 00:18:29.990 "buf_count": 2048 00:18:29.990 } 00:18:29.990 } 00:18:29.990 ] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "bdev", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "bdev_set_options", 00:18:29.990 "params": { 00:18:29.990 "bdev_io_pool_size": 65535, 00:18:29.990 "bdev_io_cache_size": 256, 00:18:29.990 "bdev_auto_examine": true, 00:18:29.990 "iobuf_small_cache_size": 128, 00:18:29.990 "iobuf_large_cache_size": 16 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "bdev_raid_set_options", 00:18:29.990 "params": { 00:18:29.990 "process_window_size_kb": 1024, 00:18:29.990 "process_max_bandwidth_mb_sec": 0 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "bdev_iscsi_set_options", 00:18:29.990 "params": { 00:18:29.990 "timeout_sec": 30 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "bdev_nvme_set_options", 00:18:29.990 "params": { 00:18:29.990 "action_on_timeout": "none", 00:18:29.990 "timeout_us": 0, 00:18:29.990 "timeout_admin_us": 0, 00:18:29.990 "keep_alive_timeout_ms": 10000, 00:18:29.990 "arbitration_burst": 0, 00:18:29.990 "low_priority_weight": 0, 00:18:29.990 "medium_priority_weight": 0, 00:18:29.990 "high_priority_weight": 0, 00:18:29.990 "nvme_adminq_poll_period_us": 10000, 00:18:29.990 "nvme_ioq_poll_period_us": 0, 00:18:29.990 "io_queue_requests": 0, 00:18:29.990 "delay_cmd_submit": true, 00:18:29.990 "transport_retry_count": 4, 00:18:29.990 "bdev_retry_count": 3, 00:18:29.990 "transport_ack_timeout": 0, 00:18:29.990 "ctrlr_loss_timeout_sec": 0, 00:18:29.990 "reconnect_delay_sec": 0, 00:18:29.990 "fast_io_fail_timeout_sec": 0, 00:18:29.990 "disable_auto_failback": false, 00:18:29.990 "generate_uuids": false, 00:18:29.990 "transport_tos": 0, 00:18:29.990 "nvme_error_stat": false, 00:18:29.990 "rdma_srq_size": 0, 00:18:29.990 "io_path_stat": false, 00:18:29.990 "allow_accel_sequence": false, 00:18:29.990 "rdma_max_cq_size": 0, 00:18:29.990 
"rdma_cm_event_timeout_ms": 0, 00:18:29.990 "dhchap_digests": [ 00:18:29.990 "sha256", 00:18:29.990 "sha384", 00:18:29.990 "sha512" 00:18:29.990 ], 00:18:29.990 "dhchap_dhgroups": [ 00:18:29.990 "null", 00:18:29.990 "ffdhe2048", 00:18:29.990 "ffdhe3072", 00:18:29.990 "ffdhe4096", 00:18:29.990 "ffdhe6144", 00:18:29.990 "ffdhe8192" 00:18:29.990 ] 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "bdev_nvme_set_hotplug", 00:18:29.990 "params": { 00:18:29.990 "period_us": 100000, 00:18:29.990 "enable": false 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "bdev_malloc_create", 00:18:29.990 "params": { 00:18:29.990 "name": "malloc0", 00:18:29.990 "num_blocks": 8192, 00:18:29.990 "block_size": 4096, 00:18:29.990 "physical_block_size": 4096, 00:18:29.990 "uuid": "daff0b61-f929-4d66-9437-18019e130af4", 00:18:29.990 "optimal_io_boundary": 0, 00:18:29.990 "md_size": 0, 00:18:29.990 "dif_type": 0, 00:18:29.990 "dif_is_head_of_md": false, 00:18:29.990 "dif_pi_format": 0 00:18:29.990 } 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "method": "bdev_wait_for_examine" 00:18:29.990 } 00:18:29.990 ] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "nbd", 00:18:29.990 "config": [] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "scheduler", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "framework_set_scheduler", 00:18:29.990 "params": { 00:18:29.990 "name": "static" 00:18:29.990 } 00:18:29.990 } 00:18:29.990 ] 00:18:29.990 }, 00:18:29.990 { 00:18:29.990 "subsystem": "nvmf", 00:18:29.990 "config": [ 00:18:29.990 { 00:18:29.990 "method": "nvmf_set_config", 00:18:29.990 "params": { 00:18:29.990 "discovery_filter": "match_any", 00:18:29.991 "admin_cmd_passthru": { 00:18:29.991 "identify_ctrlr": false 00:18:29.991 }, 00:18:29.991 "dhchap_digests": [ 00:18:29.991 "sha256", 00:18:29.991 "sha384", 00:18:29.991 "sha512" 00:18:29.991 ], 00:18:29.991 "dhchap_dhgroups": [ 00:18:29.991 "null", 00:18:29.991 "ffdhe2048", 00:18:29.991 "ffdhe3072", 00:18:29.991 "ffdhe4096", 00:18:29.991 "ffdhe6144", 00:18:29.991 "ffdhe8192" 00:18:29.991 ] 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_set_max_subsystems", 00:18:29.991 "params": { 00:18:29.991 "max_subsystems": 1024 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_set_crdt", 00:18:29.991 "params": { 00:18:29.991 "crdt1": 0, 00:18:29.991 "crdt2": 0, 00:18:29.991 "crdt3": 0 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_create_transport", 00:18:29.991 "params": { 00:18:29.991 "trtype": "TCP", 00:18:29.991 "max_queue_depth": 128, 00:18:29.991 "max_io_qpairs_per_ctrlr": 127, 00:18:29.991 "in_capsule_data_size": 4096, 00:18:29.991 "max_io_size": 131072, 00:18:29.991 "io_unit_size": 131072, 00:18:29.991 "max_aq_depth": 128, 00:18:29.991 "num_shared_buffers": 511, 00:18:29.991 "buf_cache_size": 4294967295, 00:18:29.991 "dif_insert_or_strip": false, 00:18:29.991 "zcopy": false, 00:18:29.991 "c2h_success": false, 00:18:29.991 "sock_priority": 0, 00:18:29.991 "abort_timeout_sec": 1, 00:18:29.991 "ack_timeout": 0, 00:18:29.991 "data_wr_pool_size": 0 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_create_subsystem", 00:18:29.991 "params": { 00:18:29.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.991 "allow_any_host": false, 00:18:29.991 "serial_number": "SPDK00000000000001", 00:18:29.991 "model_number": "SPDK bdev Controller", 00:18:29.991 "max_namespaces": 10, 00:18:29.991 "min_cntlid": 1, 00:18:29.991 
"max_cntlid": 65519, 00:18:29.991 "ana_reporting": false 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_subsystem_add_host", 00:18:29.991 "params": { 00:18:29.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.991 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.991 "psk": "key0" 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_subsystem_add_ns", 00:18:29.991 "params": { 00:18:29.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.991 "namespace": { 00:18:29.991 "nsid": 1, 00:18:29.991 "bdev_name": "malloc0", 00:18:29.991 "nguid": "DAFF0B61F9294D66943718019E130AF4", 00:18:29.991 "uuid": "daff0b61-f929-4d66-9437-18019e130af4", 00:18:29.991 "no_auto_visible": false 00:18:29.991 } 00:18:29.991 } 00:18:29.991 }, 00:18:29.991 { 00:18:29.991 "method": "nvmf_subsystem_add_listener", 00:18:29.991 "params": { 00:18:29.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.991 "listen_address": { 00:18:29.991 "trtype": "TCP", 00:18:29.991 "adrfam": "IPv4", 00:18:29.991 "traddr": "10.0.0.2", 00:18:29.991 "trsvcid": "4420" 00:18:29.991 }, 00:18:29.991 "secure_channel": true 00:18:29.991 } 00:18:29.991 } 00:18:29.991 ] 00:18:29.991 } 00:18:29.991 ] 00:18:29.991 }' 00:18:29.991 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:30.251 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:30.251 "subsystems": [ 00:18:30.251 { 00:18:30.251 "subsystem": "keyring", 00:18:30.251 "config": [ 00:18:30.251 { 00:18:30.251 "method": "keyring_file_add_key", 00:18:30.251 "params": { 00:18:30.251 "name": "key0", 00:18:30.251 "path": "/tmp/tmp.TbGNrubY87" 00:18:30.251 } 00:18:30.251 } 00:18:30.251 ] 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "subsystem": "iobuf", 00:18:30.251 "config": [ 00:18:30.251 { 00:18:30.251 "method": "iobuf_set_options", 00:18:30.251 "params": { 00:18:30.251 "small_pool_count": 8192, 00:18:30.251 "large_pool_count": 1024, 00:18:30.251 "small_bufsize": 8192, 00:18:30.251 "large_bufsize": 135168, 00:18:30.251 "enable_numa": false 00:18:30.251 } 00:18:30.251 } 00:18:30.251 ] 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "subsystem": "sock", 00:18:30.251 "config": [ 00:18:30.251 { 00:18:30.251 "method": "sock_set_default_impl", 00:18:30.251 "params": { 00:18:30.251 "impl_name": "posix" 00:18:30.251 } 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "method": "sock_impl_set_options", 00:18:30.251 "params": { 00:18:30.251 "impl_name": "ssl", 00:18:30.251 "recv_buf_size": 4096, 00:18:30.251 "send_buf_size": 4096, 00:18:30.251 "enable_recv_pipe": true, 00:18:30.251 "enable_quickack": false, 00:18:30.251 "enable_placement_id": 0, 00:18:30.251 "enable_zerocopy_send_server": true, 00:18:30.251 "enable_zerocopy_send_client": false, 00:18:30.251 "zerocopy_threshold": 0, 00:18:30.251 "tls_version": 0, 00:18:30.251 "enable_ktls": false 00:18:30.251 } 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "method": "sock_impl_set_options", 00:18:30.251 "params": { 00:18:30.251 "impl_name": "posix", 00:18:30.251 "recv_buf_size": 2097152, 00:18:30.251 "send_buf_size": 2097152, 00:18:30.251 "enable_recv_pipe": true, 00:18:30.251 "enable_quickack": false, 00:18:30.251 "enable_placement_id": 0, 00:18:30.251 "enable_zerocopy_send_server": true, 00:18:30.251 "enable_zerocopy_send_client": false, 00:18:30.251 "zerocopy_threshold": 0, 00:18:30.251 "tls_version": 0, 00:18:30.251 "enable_ktls": false 00:18:30.251 } 00:18:30.251 
} 00:18:30.251 ] 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "subsystem": "vmd", 00:18:30.251 "config": [] 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "subsystem": "accel", 00:18:30.251 "config": [ 00:18:30.251 { 00:18:30.251 "method": "accel_set_options", 00:18:30.251 "params": { 00:18:30.251 "small_cache_size": 128, 00:18:30.251 "large_cache_size": 16, 00:18:30.251 "task_count": 2048, 00:18:30.251 "sequence_count": 2048, 00:18:30.251 "buf_count": 2048 00:18:30.251 } 00:18:30.251 } 00:18:30.251 ] 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "subsystem": "bdev", 00:18:30.251 "config": [ 00:18:30.251 { 00:18:30.251 "method": "bdev_set_options", 00:18:30.251 "params": { 00:18:30.251 "bdev_io_pool_size": 65535, 00:18:30.251 "bdev_io_cache_size": 256, 00:18:30.251 "bdev_auto_examine": true, 00:18:30.251 "iobuf_small_cache_size": 128, 00:18:30.251 "iobuf_large_cache_size": 16 00:18:30.251 } 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "method": "bdev_raid_set_options", 00:18:30.251 "params": { 00:18:30.251 "process_window_size_kb": 1024, 00:18:30.251 "process_max_bandwidth_mb_sec": 0 00:18:30.251 } 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "method": "bdev_iscsi_set_options", 00:18:30.251 "params": { 00:18:30.251 "timeout_sec": 30 00:18:30.251 } 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "method": "bdev_nvme_set_options", 00:18:30.251 "params": { 00:18:30.251 "action_on_timeout": "none", 00:18:30.251 "timeout_us": 0, 00:18:30.251 "timeout_admin_us": 0, 00:18:30.251 "keep_alive_timeout_ms": 10000, 00:18:30.251 "arbitration_burst": 0, 00:18:30.251 "low_priority_weight": 0, 00:18:30.251 "medium_priority_weight": 0, 00:18:30.251 "high_priority_weight": 0, 00:18:30.251 "nvme_adminq_poll_period_us": 10000, 00:18:30.251 "nvme_ioq_poll_period_us": 0, 00:18:30.251 "io_queue_requests": 512, 00:18:30.251 "delay_cmd_submit": true, 00:18:30.251 "transport_retry_count": 4, 00:18:30.251 "bdev_retry_count": 3, 00:18:30.251 "transport_ack_timeout": 0, 00:18:30.251 "ctrlr_loss_timeout_sec": 0, 00:18:30.251 "reconnect_delay_sec": 0, 00:18:30.251 "fast_io_fail_timeout_sec": 0, 00:18:30.251 "disable_auto_failback": false, 00:18:30.251 "generate_uuids": false, 00:18:30.251 "transport_tos": 0, 00:18:30.251 "nvme_error_stat": false, 00:18:30.251 "rdma_srq_size": 0, 00:18:30.251 "io_path_stat": false, 00:18:30.251 "allow_accel_sequence": false, 00:18:30.251 "rdma_max_cq_size": 0, 00:18:30.251 "rdma_cm_event_timeout_ms": 0, 00:18:30.251 "dhchap_digests": [ 00:18:30.251 "sha256", 00:18:30.251 "sha384", 00:18:30.251 "sha512" 00:18:30.251 ], 00:18:30.251 "dhchap_dhgroups": [ 00:18:30.251 "null", 00:18:30.251 "ffdhe2048", 00:18:30.251 "ffdhe3072", 00:18:30.251 "ffdhe4096", 00:18:30.251 "ffdhe6144", 00:18:30.251 "ffdhe8192" 00:18:30.251 ] 00:18:30.251 } 00:18:30.251 }, 00:18:30.251 { 00:18:30.251 "method": "bdev_nvme_attach_controller", 00:18:30.251 "params": { 00:18:30.252 "name": "TLSTEST", 00:18:30.252 "trtype": "TCP", 00:18:30.252 "adrfam": "IPv4", 00:18:30.252 "traddr": "10.0.0.2", 00:18:30.252 "trsvcid": "4420", 00:18:30.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.252 "prchk_reftag": false, 00:18:30.252 "prchk_guard": false, 00:18:30.252 "ctrlr_loss_timeout_sec": 0, 00:18:30.252 "reconnect_delay_sec": 0, 00:18:30.252 "fast_io_fail_timeout_sec": 0, 00:18:30.252 "psk": "key0", 00:18:30.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.252 "hdgst": false, 00:18:30.252 "ddgst": false, 00:18:30.252 "multipath": "multipath" 00:18:30.252 } 00:18:30.252 }, 00:18:30.252 { 00:18:30.252 "method": 
"bdev_nvme_set_hotplug", 00:18:30.252 "params": { 00:18:30.252 "period_us": 100000, 00:18:30.252 "enable": false 00:18:30.252 } 00:18:30.252 }, 00:18:30.252 { 00:18:30.252 "method": "bdev_wait_for_examine" 00:18:30.252 } 00:18:30.252 ] 00:18:30.252 }, 00:18:30.252 { 00:18:30.252 "subsystem": "nbd", 00:18:30.252 "config": [] 00:18:30.252 } 00:18:30.252 ] 00:18:30.252 }' 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1448954 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1448954 ']' 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1448954 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448954 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448954' 00:18:30.252 killing process with pid 1448954 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1448954 00:18:30.252 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.252 00:18:30.252 Latency(us) 00:18:30.252 [2024-12-09T14:11:32.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.252 [2024-12-09T14:11:32.047Z] =================================================================================================================== 00:18:30.252 [2024-12-09T14:11:32.047Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.252 15:11:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1448954 00:18:30.511 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1448623 00:18:30.511 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1448623 ']' 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1448623 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448623 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448623' 00:18:30.512 killing process with pid 1448623 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1448623 00:18:30.512 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1448623 00:18:30.771 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:30.771 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.771 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.771 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:30.771 "subsystems": [ 00:18:30.771 { 00:18:30.771 "subsystem": "keyring", 00:18:30.771 "config": [ 00:18:30.771 { 00:18:30.771 "method": "keyring_file_add_key", 00:18:30.771 "params": { 00:18:30.771 "name": "key0", 00:18:30.771 "path": "/tmp/tmp.TbGNrubY87" 00:18:30.771 } 00:18:30.771 } 00:18:30.771 ] 00:18:30.771 }, 00:18:30.771 { 00:18:30.771 "subsystem": "iobuf", 00:18:30.771 "config": [ 00:18:30.771 { 00:18:30.771 "method": "iobuf_set_options", 00:18:30.771 "params": { 00:18:30.771 "small_pool_count": 8192, 00:18:30.771 "large_pool_count": 1024, 00:18:30.771 "small_bufsize": 8192, 00:18:30.771 "large_bufsize": 135168, 00:18:30.771 "enable_numa": false 00:18:30.771 } 00:18:30.771 } 00:18:30.771 ] 00:18:30.771 }, 00:18:30.771 { 00:18:30.771 "subsystem": "sock", 00:18:30.771 "config": [ 00:18:30.771 { 00:18:30.771 "method": "sock_set_default_impl", 00:18:30.771 "params": { 00:18:30.772 "impl_name": "posix" 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "sock_impl_set_options", 00:18:30.772 "params": { 00:18:30.772 "impl_name": "ssl", 00:18:30.772 "recv_buf_size": 4096, 00:18:30.772 "send_buf_size": 4096, 00:18:30.772 "enable_recv_pipe": true, 00:18:30.772 "enable_quickack": false, 00:18:30.772 "enable_placement_id": 0, 00:18:30.772 "enable_zerocopy_send_server": true, 00:18:30.772 "enable_zerocopy_send_client": false, 00:18:30.772 "zerocopy_threshold": 0, 00:18:30.772 "tls_version": 0, 00:18:30.772 "enable_ktls": false 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "sock_impl_set_options", 00:18:30.772 "params": { 00:18:30.772 "impl_name": "posix", 00:18:30.772 "recv_buf_size": 2097152, 00:18:30.772 "send_buf_size": 2097152, 00:18:30.772 "enable_recv_pipe": true, 00:18:30.772 "enable_quickack": false, 00:18:30.772 "enable_placement_id": 0, 00:18:30.772 "enable_zerocopy_send_server": true, 00:18:30.772 "enable_zerocopy_send_client": false, 00:18:30.772 "zerocopy_threshold": 0, 00:18:30.772 "tls_version": 0, 00:18:30.772 "enable_ktls": false 00:18:30.772 } 00:18:30.772 } 00:18:30.772 ] 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "subsystem": "vmd", 00:18:30.772 "config": [] 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "subsystem": "accel", 00:18:30.772 "config": [ 00:18:30.772 { 00:18:30.772 "method": "accel_set_options", 00:18:30.772 "params": { 00:18:30.772 "small_cache_size": 128, 00:18:30.772 "large_cache_size": 16, 00:18:30.772 "task_count": 2048, 00:18:30.772 "sequence_count": 2048, 00:18:30.772 "buf_count": 2048 00:18:30.772 } 00:18:30.772 } 00:18:30.772 ] 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "subsystem": "bdev", 00:18:30.772 "config": [ 00:18:30.772 { 00:18:30.772 "method": "bdev_set_options", 00:18:30.772 "params": { 00:18:30.772 "bdev_io_pool_size": 65535, 00:18:30.772 "bdev_io_cache_size": 256, 00:18:30.772 "bdev_auto_examine": true, 00:18:30.772 "iobuf_small_cache_size": 128, 00:18:30.772 "iobuf_large_cache_size": 16 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "bdev_raid_set_options", 00:18:30.772 "params": { 00:18:30.772 "process_window_size_kb": 1024, 00:18:30.772 "process_max_bandwidth_mb_sec": 0 00:18:30.772 } 00:18:30.772 }, 
00:18:30.772 { 00:18:30.772 "method": "bdev_iscsi_set_options", 00:18:30.772 "params": { 00:18:30.772 "timeout_sec": 30 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "bdev_nvme_set_options", 00:18:30.772 "params": { 00:18:30.772 "action_on_timeout": "none", 00:18:30.772 "timeout_us": 0, 00:18:30.772 "timeout_admin_us": 0, 00:18:30.772 "keep_alive_timeout_ms": 10000, 00:18:30.772 "arbitration_burst": 0, 00:18:30.772 "low_priority_weight": 0, 00:18:30.772 "medium_priority_weight": 0, 00:18:30.772 "high_priority_weight": 0, 00:18:30.772 "nvme_adminq_poll_period_us": 10000, 00:18:30.772 "nvme_ioq_poll_period_us": 0, 00:18:30.772 "io_queue_requests": 0, 00:18:30.772 "delay_cmd_submit": true, 00:18:30.772 "transport_retry_count": 4, 00:18:30.772 "bdev_retry_count": 3, 00:18:30.772 "transport_ack_timeout": 0, 00:18:30.772 "ctrlr_loss_timeout_sec": 0, 00:18:30.772 "reconnect_delay_sec": 0, 00:18:30.772 "fast_io_fail_timeout_sec": 0, 00:18:30.772 "disable_auto_failback": false, 00:18:30.772 "generate_uuids": false, 00:18:30.772 "transport_tos": 0, 00:18:30.772 "nvme_error_stat": false, 00:18:30.772 "rdma_srq_size": 0, 00:18:30.772 "io_path_stat": false, 00:18:30.772 "allow_accel_sequence": false, 00:18:30.772 "rdma_max_cq_size": 0, 00:18:30.772 "rdma_cm_event_timeout_ms": 0, 00:18:30.772 "dhchap_digests": [ 00:18:30.772 "sha256", 00:18:30.772 "sha384", 00:18:30.772 "sha512" 00:18:30.772 ], 00:18:30.772 "dhchap_dhgroups": [ 00:18:30.772 "null", 00:18:30.772 "ffdhe2048", 00:18:30.772 "ffdhe3072", 00:18:30.772 "ffdhe4096", 00:18:30.772 "ffdhe6144", 00:18:30.772 "ffdhe8192" 00:18:30.772 ] 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "bdev_nvme_set_hotplug", 00:18:30.772 "params": { 00:18:30.772 "period_us": 100000, 00:18:30.772 "enable": false 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "bdev_malloc_create", 00:18:30.772 "params": { 00:18:30.772 "name": "malloc0", 00:18:30.772 "num_blocks": 8192, 00:18:30.772 "block_size": 4096, 00:18:30.772 "physical_block_size": 4096, 00:18:30.772 "uuid": "daff0b61-f929-4d66-9437-18019e130af4", 00:18:30.772 "optimal_io_boundary": 0, 00:18:30.772 "md_size": 0, 00:18:30.772 "dif_type": 0, 00:18:30.772 "dif_is_head_of_md": false, 00:18:30.772 "dif_pi_format": 0 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "bdev_wait_for_examine" 00:18:30.772 } 00:18:30.772 ] 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "subsystem": "nbd", 00:18:30.772 "config": [] 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "subsystem": "scheduler", 00:18:30.772 "config": [ 00:18:30.772 { 00:18:30.772 "method": "framework_set_scheduler", 00:18:30.772 "params": { 00:18:30.772 "name": "static" 00:18:30.772 } 00:18:30.772 } 00:18:30.772 ] 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "subsystem": "nvmf", 00:18:30.772 "config": [ 00:18:30.772 { 00:18:30.772 "method": "nvmf_set_config", 00:18:30.772 "params": { 00:18:30.772 "discovery_filter": "match_any", 00:18:30.772 "admin_cmd_passthru": { 00:18:30.772 "identify_ctrlr": false 00:18:30.772 }, 00:18:30.772 "dhchap_digests": [ 00:18:30.772 "sha256", 00:18:30.772 "sha384", 00:18:30.772 "sha512" 00:18:30.772 ], 00:18:30.772 "dhchap_dhgroups": [ 00:18:30.772 "null", 00:18:30.772 "ffdhe2048", 00:18:30.772 "ffdhe3072", 00:18:30.772 "ffdhe4096", 00:18:30.772 "ffdhe6144", 00:18:30.772 "ffdhe8192" 00:18:30.772 ] 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "nvmf_set_max_subsystems", 00:18:30.772 "params": { 00:18:30.772 "max_subsystems": 1024 
00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "nvmf_set_crdt", 00:18:30.772 "params": { 00:18:30.772 "crdt1": 0, 00:18:30.772 "crdt2": 0, 00:18:30.772 "crdt3": 0 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "nvmf_create_transport", 00:18:30.772 "params": { 00:18:30.772 "trtype": "TCP", 00:18:30.772 "max_queue_depth": 128, 00:18:30.772 "max_io_qpairs_per_ctrlr": 127, 00:18:30.772 "in_capsule_data_size": 4096, 00:18:30.772 "max_io_size": 131072, 00:18:30.772 "io_unit_size": 131072, 00:18:30.772 "max_aq_depth": 128, 00:18:30.772 "num_shared_buffers": 511, 00:18:30.772 "buf_cache_size": 4294967295, 00:18:30.772 "dif_insert_or_strip": false, 00:18:30.772 "zcopy": false, 00:18:30.772 "c2h_success": false, 00:18:30.772 "sock_priority": 0, 00:18:30.772 "abort_timeout_sec": 1, 00:18:30.772 "ack_timeout": 0, 00:18:30.772 "data_wr_pool_size": 0 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "nvmf_create_subsystem", 00:18:30.772 "params": { 00:18:30.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.772 "allow_any_host": false, 00:18:30.772 "serial_number": "SPDK00000000000001", 00:18:30.772 "model_number": "SPDK bdev Controller", 00:18:30.772 "max_namespaces": 10, 00:18:30.772 "min_cntlid": 1, 00:18:30.772 "max_cntlid": 65519, 00:18:30.772 "ana_reporting": false 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.772 "method": "nvmf_subsystem_add_host", 00:18:30.772 "params": { 00:18:30.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.772 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.772 "psk": "key0" 00:18:30.772 } 00:18:30.772 }, 00:18:30.772 { 00:18:30.773 "method": "nvmf_subsystem_add_ns", 00:18:30.773 "params": { 00:18:30.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.773 "namespace": { 00:18:30.773 "nsid": 1, 00:18:30.773 "bdev_name": "malloc0", 00:18:30.773 "nguid": "DAFF0B61F9294D66943718019E130AF4", 00:18:30.773 "uuid": "daff0b61-f929-4d66-9437-18019e130af4", 00:18:30.773 "no_auto_visible": false 00:18:30.773 } 00:18:30.773 } 00:18:30.773 }, 00:18:30.773 { 00:18:30.773 "method": "nvmf_subsystem_add_listener", 00:18:30.773 "params": { 00:18:30.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.773 "listen_address": { 00:18:30.773 "trtype": "TCP", 00:18:30.773 "adrfam": "IPv4", 00:18:30.773 "traddr": "10.0.0.2", 00:18:30.773 "trsvcid": "4420" 00:18:30.773 }, 00:18:30.773 "secure_channel": true 00:18:30.773 } 00:18:30.773 } 00:18:30.773 ] 00:18:30.773 } 00:18:30.773 ] 00:18:30.773 }' 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1449336 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1449336 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1449336 ']' 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:30.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.773 15:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.773 [2024-12-09 15:11:32.399100] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:30.773 [2024-12-09 15:11:32.399146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.773 [2024-12-09 15:11:32.477079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.773 [2024-12-09 15:11:32.516308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.773 [2024-12-09 15:11:32.516345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.773 [2024-12-09 15:11:32.516352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.773 [2024-12-09 15:11:32.516358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.773 [2024-12-09 15:11:32.516362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.773 [2024-12-09 15:11:32.516923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.032 [2024-12-09 15:11:32.730284] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.032 [2024-12-09 15:11:32.762293] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.032 [2024-12-09 15:11:32.762488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1449371 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1449371 /var/tmp/bdevperf.sock 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1449371 ']' 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.599 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:31.600 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.600 15:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:31.600 "subsystems": [ 00:18:31.600 { 00:18:31.600 "subsystem": "keyring", 00:18:31.600 "config": [ 00:18:31.600 { 00:18:31.600 "method": "keyring_file_add_key", 00:18:31.600 "params": { 00:18:31.600 "name": "key0", 00:18:31.600 "path": "/tmp/tmp.TbGNrubY87" 00:18:31.600 } 00:18:31.600 } 00:18:31.600 ] 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "subsystem": "iobuf", 00:18:31.600 "config": [ 00:18:31.600 { 00:18:31.600 "method": "iobuf_set_options", 00:18:31.600 "params": { 00:18:31.600 "small_pool_count": 8192, 00:18:31.600 "large_pool_count": 1024, 00:18:31.600 "small_bufsize": 8192, 00:18:31.600 "large_bufsize": 135168, 00:18:31.600 "enable_numa": false 00:18:31.600 } 00:18:31.600 } 00:18:31.600 ] 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "subsystem": "sock", 00:18:31.600 "config": [ 00:18:31.600 { 00:18:31.600 "method": "sock_set_default_impl", 00:18:31.600 "params": { 00:18:31.600 "impl_name": "posix" 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "sock_impl_set_options", 00:18:31.600 "params": { 00:18:31.600 "impl_name": "ssl", 00:18:31.600 "recv_buf_size": 4096, 00:18:31.600 "send_buf_size": 4096, 00:18:31.600 "enable_recv_pipe": true, 00:18:31.600 "enable_quickack": false, 00:18:31.600 "enable_placement_id": 0, 00:18:31.600 "enable_zerocopy_send_server": true, 00:18:31.600 "enable_zerocopy_send_client": false, 00:18:31.600 "zerocopy_threshold": 0, 00:18:31.600 "tls_version": 0, 00:18:31.600 "enable_ktls": false 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "sock_impl_set_options", 00:18:31.600 "params": { 00:18:31.600 "impl_name": "posix", 00:18:31.600 "recv_buf_size": 2097152, 00:18:31.600 "send_buf_size": 2097152, 00:18:31.600 "enable_recv_pipe": true, 00:18:31.600 "enable_quickack": false, 00:18:31.600 "enable_placement_id": 0, 00:18:31.600 "enable_zerocopy_send_server": true, 00:18:31.600 "enable_zerocopy_send_client": false, 00:18:31.600 "zerocopy_threshold": 0, 00:18:31.600 "tls_version": 0, 00:18:31.600 "enable_ktls": false 00:18:31.600 } 00:18:31.600 } 00:18:31.600 ] 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "subsystem": "vmd", 00:18:31.600 "config": [] 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "subsystem": "accel", 00:18:31.600 "config": [ 00:18:31.600 { 00:18:31.600 "method": "accel_set_options", 00:18:31.600 "params": { 00:18:31.600 "small_cache_size": 128, 00:18:31.600 "large_cache_size": 16, 00:18:31.600 "task_count": 2048, 00:18:31.600 "sequence_count": 2048, 00:18:31.600 "buf_count": 2048 00:18:31.600 } 00:18:31.600 } 00:18:31.600 ] 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "subsystem": "bdev", 00:18:31.600 "config": [ 00:18:31.600 { 00:18:31.600 "method": "bdev_set_options", 00:18:31.600 "params": { 00:18:31.600 "bdev_io_pool_size": 65535, 00:18:31.600 "bdev_io_cache_size": 256, 00:18:31.600 "bdev_auto_examine": true, 00:18:31.600 "iobuf_small_cache_size": 128, 00:18:31.600 "iobuf_large_cache_size": 16 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "bdev_raid_set_options", 00:18:31.600 "params": { 00:18:31.600 "process_window_size_kb": 1024, 00:18:31.600 "process_max_bandwidth_mb_sec": 0 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "bdev_iscsi_set_options", 00:18:31.600 "params": { 00:18:31.600 "timeout_sec": 30 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "bdev_nvme_set_options", 00:18:31.600 "params": { 00:18:31.600 "action_on_timeout": "none", 00:18:31.600 
"timeout_us": 0, 00:18:31.600 "timeout_admin_us": 0, 00:18:31.600 "keep_alive_timeout_ms": 10000, 00:18:31.600 "arbitration_burst": 0, 00:18:31.600 "low_priority_weight": 0, 00:18:31.600 "medium_priority_weight": 0, 00:18:31.600 "high_priority_weight": 0, 00:18:31.600 "nvme_adminq_poll_period_us": 10000, 00:18:31.600 "nvme_ioq_poll_period_us": 0, 00:18:31.600 "io_queue_requests": 512, 00:18:31.600 "delay_cmd_submit": true, 00:18:31.600 "transport_retry_count": 4, 00:18:31.600 "bdev_retry_count": 3, 00:18:31.600 "transport_ack_timeout": 0, 00:18:31.600 "ctrlr_loss_timeout_sec": 0, 00:18:31.600 "reconnect_delay_sec": 0, 00:18:31.600 "fast_io_fail_timeout_sec": 0, 00:18:31.600 "disable_auto_failback": false, 00:18:31.600 "generate_uuids": false, 00:18:31.600 "transport_tos": 0, 00:18:31.600 "nvme_error_stat": false, 00:18:31.600 "rdma_srq_size": 0, 00:18:31.600 "io_path_stat": false, 00:18:31.600 "allow_accel_sequence": false, 00:18:31.600 "rdma_max_cq_size": 0, 00:18:31.600 "rdma_cm_event_timeout_ms": 0 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.600 , 00:18:31.600 "dhchap_digests": [ 00:18:31.600 "sha256", 00:18:31.600 "sha384", 00:18:31.600 "sha512" 00:18:31.600 ], 00:18:31.600 "dhchap_dhgroups": [ 00:18:31.600 "null", 00:18:31.600 "ffdhe2048", 00:18:31.600 "ffdhe3072", 00:18:31.600 "ffdhe4096", 00:18:31.600 "ffdhe6144", 00:18:31.600 "ffdhe8192" 00:18:31.600 ] 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "bdev_nvme_attach_controller", 00:18:31.600 "params": { 00:18:31.600 "name": "TLSTEST", 00:18:31.600 "trtype": "TCP", 00:18:31.600 "adrfam": "IPv4", 00:18:31.600 "traddr": "10.0.0.2", 00:18:31.600 "trsvcid": "4420", 00:18:31.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.600 "prchk_reftag": false, 00:18:31.600 "prchk_guard": false, 00:18:31.600 "ctrlr_loss_timeout_sec": 0, 00:18:31.600 "reconnect_delay_sec": 0, 00:18:31.600 "fast_io_fail_timeout_sec": 0, 00:18:31.600 "psk": "key0", 00:18:31.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.600 "hdgst": false, 00:18:31.600 "ddgst": false, 00:18:31.600 "multipath": "multipath" 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "bdev_nvme_set_hotplug", 00:18:31.600 "params": { 00:18:31.600 "period_us": 100000, 00:18:31.600 "enable": false 00:18:31.600 } 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "method": "bdev_wait_for_examine" 00:18:31.600 } 00:18:31.600 ] 00:18:31.600 }, 00:18:31.600 { 00:18:31.600 "subsystem": "nbd", 00:18:31.600 "config": [] 00:18:31.600 } 00:18:31.600 ] 00:18:31.600 }' 00:18:31.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.600 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.600 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.600 [2024-12-09 15:11:33.313230] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:18:31.600 [2024-12-09 15:11:33.313277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449371 ] 00:18:31.600 [2024-12-09 15:11:33.389100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.904 [2024-12-09 15:11:33.431889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.904 [2024-12-09 15:11:33.585554] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.552 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.552 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.552 15:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:32.552 Running I/O for 10 seconds... 00:18:34.862 5305.00 IOPS, 20.72 MiB/s [2024-12-09T14:11:37.594Z] 5472.00 IOPS, 21.38 MiB/s [2024-12-09T14:11:38.529Z] 5518.67 IOPS, 21.56 MiB/s [2024-12-09T14:11:39.464Z] 5545.00 IOPS, 21.66 MiB/s [2024-12-09T14:11:40.400Z] 5572.40 IOPS, 21.77 MiB/s [2024-12-09T14:11:41.333Z] 5586.67 IOPS, 21.82 MiB/s [2024-12-09T14:11:42.268Z] 5586.29 IOPS, 21.82 MiB/s [2024-12-09T14:11:43.644Z] 5600.75 IOPS, 21.88 MiB/s [2024-12-09T14:11:44.579Z] 5607.56 IOPS, 21.90 MiB/s [2024-12-09T14:11:44.579Z] 5614.70 IOPS, 21.93 MiB/s 00:18:42.784 Latency(us) 00:18:42.784 [2024-12-09T14:11:44.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.784 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.784 Verification LBA range: start 0x0 length 0x2000 00:18:42.784 TLSTESTn1 : 10.02 5618.06 21.95 0.00 0.00 22747.64 6459.98 27213.04 00:18:42.784 [2024-12-09T14:11:44.579Z] =================================================================================================================== 00:18:42.784 [2024-12-09T14:11:44.579Z] Total : 5618.06 21.95 0.00 0.00 22747.64 6459.98 27213.04 00:18:42.784 { 00:18:42.784 "results": [ 00:18:42.784 { 00:18:42.784 "job": "TLSTESTn1", 00:18:42.784 "core_mask": "0x4", 00:18:42.784 "workload": "verify", 00:18:42.784 "status": "finished", 00:18:42.784 "verify_range": { 00:18:42.784 "start": 0, 00:18:42.784 "length": 8192 00:18:42.784 }, 00:18:42.784 "queue_depth": 128, 00:18:42.784 "io_size": 4096, 00:18:42.784 "runtime": 10.016623, 00:18:42.784 "iops": 5618.061097038393, 00:18:42.784 "mibps": 21.945551160306223, 00:18:42.785 "io_failed": 0, 00:18:42.785 "io_timeout": 0, 00:18:42.785 "avg_latency_us": 22747.639410300282, 00:18:42.785 "min_latency_us": 6459.977142857143, 00:18:42.785 "max_latency_us": 27213.04380952381 00:18:42.785 } 00:18:42.785 ], 00:18:42.785 "core_count": 1 00:18:42.785 } 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1449371 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1449371 ']' 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1449371 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449371 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449371' 00:18:42.785 killing process with pid 1449371 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1449371 00:18:42.785 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.785 00:18:42.785 Latency(us) 00:18:42.785 [2024-12-09T14:11:44.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.785 [2024-12-09T14:11:44.580Z] =================================================================================================================== 00:18:42.785 [2024-12-09T14:11:44.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1449371 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1449336 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1449336 ']' 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1449336 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449336 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449336' 00:18:42.785 killing process with pid 1449336 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1449336 00:18:42.785 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1449336 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1451259 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1451259 
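For orientation, the nvmfappstart/waitforlisten pattern traced here amounts to roughly the following shell steps (a sketch of the helpers in nvmf/common.sh and autotest_common.sh; the real helper internals may differ in detail):

  # start the target inside the test netns and remember its pid (sketch)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!            # 1451259 in this run
  # block until the app is listening on its RPC socket, /var/tmp/spdk.sock
  waitforlisten $nvmfpid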
00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1451259 ']' 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.044 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.044 [2024-12-09 15:11:44.788231] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:43.044 [2024-12-09 15:11:44.788280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.303 [2024-12-09 15:11:44.861081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.303 [2024-12-09 15:11:44.897726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.303 [2024-12-09 15:11:44.897763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.303 [2024-12-09 15:11:44.897770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.303 [2024-12-09 15:11:44.897776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.303 [2024-12-09 15:11:44.897782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.303 [2024-12-09 15:11:44.898304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.303 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.303 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.303 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.303 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.303 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.303 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.303 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.TbGNrubY87 00:18:43.303 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TbGNrubY87 00:18:43.303 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.561 [2024-12-09 15:11:45.201836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.561 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.819 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.819 [2024-12-09 15:11:45.582814] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.819 [2024-12-09 15:11:45.583014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.819 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:44.078 malloc0 00:18:44.078 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:44.337 15:11:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1451647 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1451647 /var/tmp/bdevperf.sock 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1451647 ']' 00:18:44.595 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.854 [2024-12-09 15:11:46.432789] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:44.854 [2024-12-09 15:11:46.432840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1451647 ] 00:18:44.854 [2024-12-09 15:11:46.505615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.854 [2024-12-09 15:11:46.544783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.854 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:45.112 15:11:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:45.371 [2024-12-09 15:11:47.000902] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.371 nvme0n1 00:18:45.371 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.629 Running I/O for 1 seconds... 
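For readability, the setup_nvmf_tgt steps (target/tls.sh@52 through @59) and the bdevperf-side attach (@229, @230) traced above reduce to roughly this RPC sequence; nqn.2016-06.io.spdk:cnode1/host1, the 10.0.0.2:4420 listener and the temporary PSK file /tmp/tmp.TbGNrubY87 are this run's values, and rpc.py stands for the full scripts/rpc.py path shown in the log:

  # target side: TCP transport, subsystem, TLS listener (-k) and PSK-authenticated host
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.TbGNrubY87
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side (bdevperf RPC socket): load the same key, then attach over TLS
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1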
00:18:46.565 5368.00 IOPS, 20.97 MiB/s 00:18:46.565 Latency(us) 00:18:46.565 [2024-12-09T14:11:48.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.565 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.565 Verification LBA range: start 0x0 length 0x2000 00:18:46.565 nvme0n1 : 1.01 5428.95 21.21 0.00 0.00 23425.59 4649.94 27837.20 00:18:46.565 [2024-12-09T14:11:48.360Z] =================================================================================================================== 00:18:46.565 [2024-12-09T14:11:48.360Z] Total : 5428.95 21.21 0.00 0.00 23425.59 4649.94 27837.20 00:18:46.565 { 00:18:46.565 "results": [ 00:18:46.565 { 00:18:46.565 "job": "nvme0n1", 00:18:46.565 "core_mask": "0x2", 00:18:46.565 "workload": "verify", 00:18:46.565 "status": "finished", 00:18:46.565 "verify_range": { 00:18:46.565 "start": 0, 00:18:46.565 "length": 8192 00:18:46.565 }, 00:18:46.565 "queue_depth": 128, 00:18:46.565 "io_size": 4096, 00:18:46.565 "runtime": 1.01235, 00:18:46.565 "iops": 5428.952437398133, 00:18:46.565 "mibps": 21.206845458586457, 00:18:46.565 "io_failed": 0, 00:18:46.565 "io_timeout": 0, 00:18:46.565 "avg_latency_us": 23425.585294239965, 00:18:46.565 "min_latency_us": 4649.935238095238, 00:18:46.565 "max_latency_us": 27837.196190476192 00:18:46.565 } 00:18:46.565 ], 00:18:46.565 "core_count": 1 00:18:46.565 } 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1451647 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1451647 ']' 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1451647 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1451647 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1451647' 00:18:46.565 killing process with pid 1451647 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1451647 00:18:46.565 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.565 00:18:46.565 Latency(us) 00:18:46.565 [2024-12-09T14:11:48.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.565 [2024-12-09T14:11:48.360Z] =================================================================================================================== 00:18:46.565 [2024-12-09T14:11:48.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.565 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1451647 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1451259 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1451259 ']' 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1451259 00:18:46.824 15:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1451259 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1451259' 00:18:46.824 killing process with pid 1451259 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1451259 00:18:46.824 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1451259 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1451915 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1451915 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1451915 ']' 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.083 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.083 [2024-12-09 15:11:48.699736] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:47.083 [2024-12-09 15:11:48.699784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.083 [2024-12-09 15:11:48.778640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.083 [2024-12-09 15:11:48.812469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.083 [2024-12-09 15:11:48.812504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.083 [2024-12-09 15:11:48.812511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.083 [2024-12-09 15:11:48.812516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.083 [2024-12-09 15:11:48.812521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.083 [2024-12-09 15:11:48.813060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.342 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.342 [2024-12-09 15:11:48.961188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.342 malloc0 00:18:47.342 [2024-12-09 15:11:48.989334] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.342 [2024-12-09 15:11:48.989526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.342 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.342 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1452036 00:18:47.342 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.342 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1452036 /var/tmp/bdevperf.sock 00:18:47.342 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1452036 ']' 00:18:47.343 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.343 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.343 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.343 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.343 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.343 [2024-12-09 15:11:49.067832] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:18:47.343 [2024-12-09 15:11:49.067877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452036 ] 00:18:47.601 [2024-12-09 15:11:49.143410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.601 [2024-12-09 15:11:49.183557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.601 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.601 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.602 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TbGNrubY87 00:18:47.860 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.860 [2024-12-09 15:11:49.635090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.119 nvme0n1 00:18:48.119 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.119 Running I/O for 1 seconds... 00:18:49.053 5412.00 IOPS, 21.14 MiB/s 00:18:49.053 Latency(us) 00:18:49.053 [2024-12-09T14:11:50.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.053 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.053 Verification LBA range: start 0x0 length 0x2000 00:18:49.053 nvme0n1 : 1.01 5462.97 21.34 0.00 0.00 23268.10 5586.16 25715.08 00:18:49.053 [2024-12-09T14:11:50.848Z] =================================================================================================================== 00:18:49.053 [2024-12-09T14:11:50.848Z] Total : 5462.97 21.34 0.00 0.00 23268.10 5586.16 25715.08 00:18:49.053 { 00:18:49.053 "results": [ 00:18:49.053 { 00:18:49.054 "job": "nvme0n1", 00:18:49.054 "core_mask": "0x2", 00:18:49.054 "workload": "verify", 00:18:49.054 "status": "finished", 00:18:49.054 "verify_range": { 00:18:49.054 "start": 0, 00:18:49.054 "length": 8192 00:18:49.054 }, 00:18:49.054 "queue_depth": 128, 00:18:49.054 "io_size": 4096, 00:18:49.054 "runtime": 1.0141, 00:18:49.054 "iops": 5462.972093481905, 00:18:49.054 "mibps": 21.339734740163692, 00:18:49.054 "io_failed": 0, 00:18:49.054 "io_timeout": 0, 00:18:49.054 "avg_latency_us": 23268.099388688326, 00:18:49.054 "min_latency_us": 5586.1638095238095, 00:18:49.054 "max_latency_us": 25715.078095238096 00:18:49.054 } 00:18:49.054 ], 00:18:49.054 "core_count": 1 00:18:49.054 } 00:18:49.054 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:49.054 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.054 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.313 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.313 15:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:49.313 "subsystems": [ 00:18:49.313 { 00:18:49.313 "subsystem": "keyring", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "keyring_file_add_key", 00:18:49.313 "params": { 00:18:49.313 "name": "key0", 00:18:49.313 "path": "/tmp/tmp.TbGNrubY87" 00:18:49.313 } 00:18:49.313 } 00:18:49.313 ] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "iobuf", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "iobuf_set_options", 00:18:49.313 "params": { 00:18:49.313 "small_pool_count": 8192, 00:18:49.313 "large_pool_count": 1024, 00:18:49.313 "small_bufsize": 8192, 00:18:49.313 "large_bufsize": 135168, 00:18:49.313 "enable_numa": false 00:18:49.313 } 00:18:49.313 } 00:18:49.313 ] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "sock", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "sock_set_default_impl", 00:18:49.313 "params": { 00:18:49.313 "impl_name": "posix" 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "sock_impl_set_options", 00:18:49.313 "params": { 00:18:49.313 "impl_name": "ssl", 00:18:49.313 "recv_buf_size": 4096, 00:18:49.313 "send_buf_size": 4096, 00:18:49.313 "enable_recv_pipe": true, 00:18:49.313 "enable_quickack": false, 00:18:49.313 "enable_placement_id": 0, 00:18:49.313 "enable_zerocopy_send_server": true, 00:18:49.313 "enable_zerocopy_send_client": false, 00:18:49.313 "zerocopy_threshold": 0, 00:18:49.313 "tls_version": 0, 00:18:49.313 "enable_ktls": false 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "sock_impl_set_options", 00:18:49.313 "params": { 00:18:49.313 "impl_name": "posix", 00:18:49.313 "recv_buf_size": 2097152, 00:18:49.313 "send_buf_size": 2097152, 00:18:49.313 "enable_recv_pipe": true, 00:18:49.313 "enable_quickack": false, 00:18:49.313 "enable_placement_id": 0, 00:18:49.313 "enable_zerocopy_send_server": true, 00:18:49.313 "enable_zerocopy_send_client": false, 00:18:49.313 "zerocopy_threshold": 0, 00:18:49.313 "tls_version": 0, 00:18:49.313 "enable_ktls": false 00:18:49.313 } 00:18:49.313 } 00:18:49.313 ] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "vmd", 00:18:49.313 "config": [] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "accel", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "accel_set_options", 00:18:49.313 "params": { 00:18:49.313 "small_cache_size": 128, 00:18:49.313 "large_cache_size": 16, 00:18:49.313 "task_count": 2048, 00:18:49.313 "sequence_count": 2048, 00:18:49.313 "buf_count": 2048 00:18:49.313 } 00:18:49.313 } 00:18:49.313 ] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "bdev", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "bdev_set_options", 00:18:49.313 "params": { 00:18:49.313 "bdev_io_pool_size": 65535, 00:18:49.313 "bdev_io_cache_size": 256, 00:18:49.313 "bdev_auto_examine": true, 00:18:49.313 "iobuf_small_cache_size": 128, 00:18:49.313 "iobuf_large_cache_size": 16 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "bdev_raid_set_options", 00:18:49.313 "params": { 00:18:49.313 "process_window_size_kb": 1024, 00:18:49.313 "process_max_bandwidth_mb_sec": 0 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "bdev_iscsi_set_options", 00:18:49.313 "params": { 00:18:49.313 "timeout_sec": 30 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "bdev_nvme_set_options", 00:18:49.313 "params": { 00:18:49.313 "action_on_timeout": "none", 00:18:49.313 
"timeout_us": 0, 00:18:49.313 "timeout_admin_us": 0, 00:18:49.313 "keep_alive_timeout_ms": 10000, 00:18:49.313 "arbitration_burst": 0, 00:18:49.313 "low_priority_weight": 0, 00:18:49.313 "medium_priority_weight": 0, 00:18:49.313 "high_priority_weight": 0, 00:18:49.313 "nvme_adminq_poll_period_us": 10000, 00:18:49.313 "nvme_ioq_poll_period_us": 0, 00:18:49.313 "io_queue_requests": 0, 00:18:49.313 "delay_cmd_submit": true, 00:18:49.313 "transport_retry_count": 4, 00:18:49.313 "bdev_retry_count": 3, 00:18:49.313 "transport_ack_timeout": 0, 00:18:49.313 "ctrlr_loss_timeout_sec": 0, 00:18:49.313 "reconnect_delay_sec": 0, 00:18:49.313 "fast_io_fail_timeout_sec": 0, 00:18:49.313 "disable_auto_failback": false, 00:18:49.313 "generate_uuids": false, 00:18:49.313 "transport_tos": 0, 00:18:49.313 "nvme_error_stat": false, 00:18:49.313 "rdma_srq_size": 0, 00:18:49.313 "io_path_stat": false, 00:18:49.313 "allow_accel_sequence": false, 00:18:49.313 "rdma_max_cq_size": 0, 00:18:49.313 "rdma_cm_event_timeout_ms": 0, 00:18:49.313 "dhchap_digests": [ 00:18:49.313 "sha256", 00:18:49.313 "sha384", 00:18:49.313 "sha512" 00:18:49.313 ], 00:18:49.313 "dhchap_dhgroups": [ 00:18:49.313 "null", 00:18:49.313 "ffdhe2048", 00:18:49.313 "ffdhe3072", 00:18:49.313 "ffdhe4096", 00:18:49.313 "ffdhe6144", 00:18:49.313 "ffdhe8192" 00:18:49.313 ] 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "bdev_nvme_set_hotplug", 00:18:49.313 "params": { 00:18:49.313 "period_us": 100000, 00:18:49.313 "enable": false 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "bdev_malloc_create", 00:18:49.313 "params": { 00:18:49.313 "name": "malloc0", 00:18:49.313 "num_blocks": 8192, 00:18:49.313 "block_size": 4096, 00:18:49.313 "physical_block_size": 4096, 00:18:49.313 "uuid": "d64865cf-0ed7-46ab-a52e-679c538c25da", 00:18:49.313 "optimal_io_boundary": 0, 00:18:49.313 "md_size": 0, 00:18:49.313 "dif_type": 0, 00:18:49.313 "dif_is_head_of_md": false, 00:18:49.313 "dif_pi_format": 0 00:18:49.313 } 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "method": "bdev_wait_for_examine" 00:18:49.313 } 00:18:49.313 ] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "nbd", 00:18:49.313 "config": [] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "scheduler", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "framework_set_scheduler", 00:18:49.313 "params": { 00:18:49.313 "name": "static" 00:18:49.313 } 00:18:49.313 } 00:18:49.313 ] 00:18:49.313 }, 00:18:49.313 { 00:18:49.313 "subsystem": "nvmf", 00:18:49.313 "config": [ 00:18:49.313 { 00:18:49.313 "method": "nvmf_set_config", 00:18:49.313 "params": { 00:18:49.313 "discovery_filter": "match_any", 00:18:49.313 "admin_cmd_passthru": { 00:18:49.313 "identify_ctrlr": false 00:18:49.313 }, 00:18:49.313 "dhchap_digests": [ 00:18:49.313 "sha256", 00:18:49.313 "sha384", 00:18:49.313 "sha512" 00:18:49.313 ], 00:18:49.314 "dhchap_dhgroups": [ 00:18:49.314 "null", 00:18:49.314 "ffdhe2048", 00:18:49.314 "ffdhe3072", 00:18:49.314 "ffdhe4096", 00:18:49.314 "ffdhe6144", 00:18:49.314 "ffdhe8192" 00:18:49.314 ] 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_set_max_subsystems", 00:18:49.314 "params": { 00:18:49.314 "max_subsystems": 1024 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_set_crdt", 00:18:49.314 "params": { 00:18:49.314 "crdt1": 0, 00:18:49.314 "crdt2": 0, 00:18:49.314 "crdt3": 0 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_create_transport", 00:18:49.314 "params": 
{ 00:18:49.314 "trtype": "TCP", 00:18:49.314 "max_queue_depth": 128, 00:18:49.314 "max_io_qpairs_per_ctrlr": 127, 00:18:49.314 "in_capsule_data_size": 4096, 00:18:49.314 "max_io_size": 131072, 00:18:49.314 "io_unit_size": 131072, 00:18:49.314 "max_aq_depth": 128, 00:18:49.314 "num_shared_buffers": 511, 00:18:49.314 "buf_cache_size": 4294967295, 00:18:49.314 "dif_insert_or_strip": false, 00:18:49.314 "zcopy": false, 00:18:49.314 "c2h_success": false, 00:18:49.314 "sock_priority": 0, 00:18:49.314 "abort_timeout_sec": 1, 00:18:49.314 "ack_timeout": 0, 00:18:49.314 "data_wr_pool_size": 0 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_create_subsystem", 00:18:49.314 "params": { 00:18:49.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.314 "allow_any_host": false, 00:18:49.314 "serial_number": "00000000000000000000", 00:18:49.314 "model_number": "SPDK bdev Controller", 00:18:49.314 "max_namespaces": 32, 00:18:49.314 "min_cntlid": 1, 00:18:49.314 "max_cntlid": 65519, 00:18:49.314 "ana_reporting": false 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_subsystem_add_host", 00:18:49.314 "params": { 00:18:49.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.314 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.314 "psk": "key0" 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_subsystem_add_ns", 00:18:49.314 "params": { 00:18:49.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.314 "namespace": { 00:18:49.314 "nsid": 1, 00:18:49.314 "bdev_name": "malloc0", 00:18:49.314 "nguid": "D64865CF0ED746ABA52E679C538C25DA", 00:18:49.314 "uuid": "d64865cf-0ed7-46ab-a52e-679c538c25da", 00:18:49.314 "no_auto_visible": false 00:18:49.314 } 00:18:49.314 } 00:18:49.314 }, 00:18:49.314 { 00:18:49.314 "method": "nvmf_subsystem_add_listener", 00:18:49.314 "params": { 00:18:49.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.314 "listen_address": { 00:18:49.314 "trtype": "TCP", 00:18:49.314 "adrfam": "IPv4", 00:18:49.314 "traddr": "10.0.0.2", 00:18:49.314 "trsvcid": "4420" 00:18:49.314 }, 00:18:49.314 "secure_channel": false, 00:18:49.314 "sock_impl": "ssl" 00:18:49.314 } 00:18:49.314 } 00:18:49.314 ] 00:18:49.314 } 00:18:49.314 ] 00:18:49.314 }' 00:18:49.314 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:49.573 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:49.573 "subsystems": [ 00:18:49.573 { 00:18:49.573 "subsystem": "keyring", 00:18:49.573 "config": [ 00:18:49.573 { 00:18:49.573 "method": "keyring_file_add_key", 00:18:49.573 "params": { 00:18:49.573 "name": "key0", 00:18:49.573 "path": "/tmp/tmp.TbGNrubY87" 00:18:49.573 } 00:18:49.573 } 00:18:49.573 ] 00:18:49.573 }, 00:18:49.573 { 00:18:49.573 "subsystem": "iobuf", 00:18:49.573 "config": [ 00:18:49.573 { 00:18:49.573 "method": "iobuf_set_options", 00:18:49.573 "params": { 00:18:49.573 "small_pool_count": 8192, 00:18:49.573 "large_pool_count": 1024, 00:18:49.573 "small_bufsize": 8192, 00:18:49.573 "large_bufsize": 135168, 00:18:49.573 "enable_numa": false 00:18:49.573 } 00:18:49.573 } 00:18:49.573 ] 00:18:49.573 }, 00:18:49.573 { 00:18:49.573 "subsystem": "sock", 00:18:49.573 "config": [ 00:18:49.573 { 00:18:49.573 "method": "sock_set_default_impl", 00:18:49.573 "params": { 00:18:49.573 "impl_name": "posix" 00:18:49.573 } 00:18:49.573 }, 00:18:49.573 { 00:18:49.573 "method": "sock_impl_set_options", 00:18:49.573 
"params": { 00:18:49.573 "impl_name": "ssl", 00:18:49.573 "recv_buf_size": 4096, 00:18:49.573 "send_buf_size": 4096, 00:18:49.573 "enable_recv_pipe": true, 00:18:49.573 "enable_quickack": false, 00:18:49.573 "enable_placement_id": 0, 00:18:49.573 "enable_zerocopy_send_server": true, 00:18:49.573 "enable_zerocopy_send_client": false, 00:18:49.573 "zerocopy_threshold": 0, 00:18:49.573 "tls_version": 0, 00:18:49.573 "enable_ktls": false 00:18:49.573 } 00:18:49.573 }, 00:18:49.573 { 00:18:49.573 "method": "sock_impl_set_options", 00:18:49.573 "params": { 00:18:49.573 "impl_name": "posix", 00:18:49.573 "recv_buf_size": 2097152, 00:18:49.573 "send_buf_size": 2097152, 00:18:49.573 "enable_recv_pipe": true, 00:18:49.573 "enable_quickack": false, 00:18:49.573 "enable_placement_id": 0, 00:18:49.573 "enable_zerocopy_send_server": true, 00:18:49.573 "enable_zerocopy_send_client": false, 00:18:49.573 "zerocopy_threshold": 0, 00:18:49.573 "tls_version": 0, 00:18:49.573 "enable_ktls": false 00:18:49.573 } 00:18:49.573 } 00:18:49.573 ] 00:18:49.573 }, 00:18:49.573 { 00:18:49.573 "subsystem": "vmd", 00:18:49.573 "config": [] 00:18:49.573 }, 00:18:49.573 { 00:18:49.574 "subsystem": "accel", 00:18:49.574 "config": [ 00:18:49.574 { 00:18:49.574 "method": "accel_set_options", 00:18:49.574 "params": { 00:18:49.574 "small_cache_size": 128, 00:18:49.574 "large_cache_size": 16, 00:18:49.574 "task_count": 2048, 00:18:49.574 "sequence_count": 2048, 00:18:49.574 "buf_count": 2048 00:18:49.574 } 00:18:49.574 } 00:18:49.574 ] 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "subsystem": "bdev", 00:18:49.574 "config": [ 00:18:49.574 { 00:18:49.574 "method": "bdev_set_options", 00:18:49.574 "params": { 00:18:49.574 "bdev_io_pool_size": 65535, 00:18:49.574 "bdev_io_cache_size": 256, 00:18:49.574 "bdev_auto_examine": true, 00:18:49.574 "iobuf_small_cache_size": 128, 00:18:49.574 "iobuf_large_cache_size": 16 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_raid_set_options", 00:18:49.574 "params": { 00:18:49.574 "process_window_size_kb": 1024, 00:18:49.574 "process_max_bandwidth_mb_sec": 0 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_iscsi_set_options", 00:18:49.574 "params": { 00:18:49.574 "timeout_sec": 30 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_nvme_set_options", 00:18:49.574 "params": { 00:18:49.574 "action_on_timeout": "none", 00:18:49.574 "timeout_us": 0, 00:18:49.574 "timeout_admin_us": 0, 00:18:49.574 "keep_alive_timeout_ms": 10000, 00:18:49.574 "arbitration_burst": 0, 00:18:49.574 "low_priority_weight": 0, 00:18:49.574 "medium_priority_weight": 0, 00:18:49.574 "high_priority_weight": 0, 00:18:49.574 "nvme_adminq_poll_period_us": 10000, 00:18:49.574 "nvme_ioq_poll_period_us": 0, 00:18:49.574 "io_queue_requests": 512, 00:18:49.574 "delay_cmd_submit": true, 00:18:49.574 "transport_retry_count": 4, 00:18:49.574 "bdev_retry_count": 3, 00:18:49.574 "transport_ack_timeout": 0, 00:18:49.574 "ctrlr_loss_timeout_sec": 0, 00:18:49.574 "reconnect_delay_sec": 0, 00:18:49.574 "fast_io_fail_timeout_sec": 0, 00:18:49.574 "disable_auto_failback": false, 00:18:49.574 "generate_uuids": false, 00:18:49.574 "transport_tos": 0, 00:18:49.574 "nvme_error_stat": false, 00:18:49.574 "rdma_srq_size": 0, 00:18:49.574 "io_path_stat": false, 00:18:49.574 "allow_accel_sequence": false, 00:18:49.574 "rdma_max_cq_size": 0, 00:18:49.574 "rdma_cm_event_timeout_ms": 0, 00:18:49.574 "dhchap_digests": [ 00:18:49.574 "sha256", 00:18:49.574 "sha384", 00:18:49.574 
"sha512" 00:18:49.574 ], 00:18:49.574 "dhchap_dhgroups": [ 00:18:49.574 "null", 00:18:49.574 "ffdhe2048", 00:18:49.574 "ffdhe3072", 00:18:49.574 "ffdhe4096", 00:18:49.574 "ffdhe6144", 00:18:49.574 "ffdhe8192" 00:18:49.574 ] 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_nvme_attach_controller", 00:18:49.574 "params": { 00:18:49.574 "name": "nvme0", 00:18:49.574 "trtype": "TCP", 00:18:49.574 "adrfam": "IPv4", 00:18:49.574 "traddr": "10.0.0.2", 00:18:49.574 "trsvcid": "4420", 00:18:49.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.574 "prchk_reftag": false, 00:18:49.574 "prchk_guard": false, 00:18:49.574 "ctrlr_loss_timeout_sec": 0, 00:18:49.574 "reconnect_delay_sec": 0, 00:18:49.574 "fast_io_fail_timeout_sec": 0, 00:18:49.574 "psk": "key0", 00:18:49.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.574 "hdgst": false, 00:18:49.574 "ddgst": false, 00:18:49.574 "multipath": "multipath" 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_nvme_set_hotplug", 00:18:49.574 "params": { 00:18:49.574 "period_us": 100000, 00:18:49.574 "enable": false 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_enable_histogram", 00:18:49.574 "params": { 00:18:49.574 "name": "nvme0n1", 00:18:49.574 "enable": true 00:18:49.574 } 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "method": "bdev_wait_for_examine" 00:18:49.574 } 00:18:49.574 ] 00:18:49.574 }, 00:18:49.574 { 00:18:49.574 "subsystem": "nbd", 00:18:49.574 "config": [] 00:18:49.574 } 00:18:49.574 ] 00:18:49.574 }' 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1452036 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1452036 ']' 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1452036 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1452036 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1452036' 00:18:49.574 killing process with pid 1452036 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1452036 00:18:49.574 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.574 00:18:49.574 Latency(us) 00:18:49.574 [2024-12-09T14:11:51.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.574 [2024-12-09T14:11:51.369Z] =================================================================================================================== 00:18:49.574 [2024-12-09T14:11:51.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.574 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1452036 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1451915 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1451915 
']' 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1451915 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1451915 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1451915' 00:18:49.834 killing process with pid 1451915 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1451915 00:18:49.834 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1451915 00:18:50.093 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:50.093 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.093 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.093 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:50.093 "subsystems": [ 00:18:50.093 { 00:18:50.093 "subsystem": "keyring", 00:18:50.093 "config": [ 00:18:50.093 { 00:18:50.093 "method": "keyring_file_add_key", 00:18:50.093 "params": { 00:18:50.093 "name": "key0", 00:18:50.093 "path": "/tmp/tmp.TbGNrubY87" 00:18:50.093 } 00:18:50.093 } 00:18:50.093 ] 00:18:50.093 }, 00:18:50.093 { 00:18:50.093 "subsystem": "iobuf", 00:18:50.093 "config": [ 00:18:50.093 { 00:18:50.093 "method": "iobuf_set_options", 00:18:50.093 "params": { 00:18:50.093 "small_pool_count": 8192, 00:18:50.093 "large_pool_count": 1024, 00:18:50.093 "small_bufsize": 8192, 00:18:50.093 "large_bufsize": 135168, 00:18:50.093 "enable_numa": false 00:18:50.093 } 00:18:50.093 } 00:18:50.093 ] 00:18:50.093 }, 00:18:50.093 { 00:18:50.093 "subsystem": "sock", 00:18:50.093 "config": [ 00:18:50.093 { 00:18:50.093 "method": "sock_set_default_impl", 00:18:50.093 "params": { 00:18:50.093 "impl_name": "posix" 00:18:50.093 } 00:18:50.093 }, 00:18:50.093 { 00:18:50.093 "method": "sock_impl_set_options", 00:18:50.093 "params": { 00:18:50.093 "impl_name": "ssl", 00:18:50.093 "recv_buf_size": 4096, 00:18:50.093 "send_buf_size": 4096, 00:18:50.093 "enable_recv_pipe": true, 00:18:50.093 "enable_quickack": false, 00:18:50.093 "enable_placement_id": 0, 00:18:50.093 "enable_zerocopy_send_server": true, 00:18:50.093 "enable_zerocopy_send_client": false, 00:18:50.093 "zerocopy_threshold": 0, 00:18:50.093 "tls_version": 0, 00:18:50.093 "enable_ktls": false 00:18:50.093 } 00:18:50.093 }, 00:18:50.093 { 00:18:50.093 "method": "sock_impl_set_options", 00:18:50.093 "params": { 00:18:50.093 "impl_name": "posix", 00:18:50.093 "recv_buf_size": 2097152, 00:18:50.093 "send_buf_size": 2097152, 00:18:50.093 "enable_recv_pipe": true, 00:18:50.093 "enable_quickack": false, 00:18:50.093 "enable_placement_id": 0, 00:18:50.093 "enable_zerocopy_send_server": true, 00:18:50.093 "enable_zerocopy_send_client": false, 00:18:50.093 "zerocopy_threshold": 0, 00:18:50.093 "tls_version": 0, 00:18:50.093 "enable_ktls": 
false 00:18:50.093 } 00:18:50.093 } 00:18:50.093 ] 00:18:50.093 }, 00:18:50.093 { 00:18:50.093 "subsystem": "vmd", 00:18:50.093 "config": [] 00:18:50.093 }, 00:18:50.093 { 00:18:50.094 "subsystem": "accel", 00:18:50.094 "config": [ 00:18:50.094 { 00:18:50.094 "method": "accel_set_options", 00:18:50.094 "params": { 00:18:50.094 "small_cache_size": 128, 00:18:50.094 "large_cache_size": 16, 00:18:50.094 "task_count": 2048, 00:18:50.094 "sequence_count": 2048, 00:18:50.094 "buf_count": 2048 00:18:50.094 } 00:18:50.094 } 00:18:50.094 ] 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "subsystem": "bdev", 00:18:50.094 "config": [ 00:18:50.094 { 00:18:50.094 "method": "bdev_set_options", 00:18:50.094 "params": { 00:18:50.094 "bdev_io_pool_size": 65535, 00:18:50.094 "bdev_io_cache_size": 256, 00:18:50.094 "bdev_auto_examine": true, 00:18:50.094 "iobuf_small_cache_size": 128, 00:18:50.094 "iobuf_large_cache_size": 16 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "bdev_raid_set_options", 00:18:50.094 "params": { 00:18:50.094 "process_window_size_kb": 1024, 00:18:50.094 "process_max_bandwidth_mb_sec": 0 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "bdev_iscsi_set_options", 00:18:50.094 "params": { 00:18:50.094 "timeout_sec": 30 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "bdev_nvme_set_options", 00:18:50.094 "params": { 00:18:50.094 "action_on_timeout": "none", 00:18:50.094 "timeout_us": 0, 00:18:50.094 "timeout_admin_us": 0, 00:18:50.094 "keep_alive_timeout_ms": 10000, 00:18:50.094 "arbitration_burst": 0, 00:18:50.094 "low_priority_weight": 0, 00:18:50.094 "medium_priority_weight": 0, 00:18:50.094 "high_priority_weight": 0, 00:18:50.094 "nvme_adminq_poll_period_us": 10000, 00:18:50.094 "nvme_ioq_poll_period_us": 0, 00:18:50.094 "io_queue_requests": 0, 00:18:50.094 "delay_cmd_submit": true, 00:18:50.094 "transport_retry_count": 4, 00:18:50.094 "bdev_retry_count": 3, 00:18:50.094 "transport_ack_timeout": 0, 00:18:50.094 "ctrlr_loss_timeout_sec": 0, 00:18:50.094 "reconnect_delay_sec": 0, 00:18:50.094 "fast_io_fail_timeout_sec": 0, 00:18:50.094 "disable_auto_failback": false, 00:18:50.094 "generate_uuids": false, 00:18:50.094 "transport_tos": 0, 00:18:50.094 "nvme_error_stat": false, 00:18:50.094 "rdma_srq_size": 0, 00:18:50.094 "io_path_stat": false, 00:18:50.094 "allow_accel_sequence": false, 00:18:50.094 "rdma_max_cq_size": 0, 00:18:50.094 "rdma_cm_event_timeout_ms": 0, 00:18:50.094 "dhchap_digests": [ 00:18:50.094 "sha256", 00:18:50.094 "sha384", 00:18:50.094 "sha512" 00:18:50.094 ], 00:18:50.094 "dhchap_dhgroups": [ 00:18:50.094 "null", 00:18:50.094 "ffdhe2048", 00:18:50.094 "ffdhe3072", 00:18:50.094 "ffdhe4096", 00:18:50.094 "ffdhe6144", 00:18:50.094 "ffdhe8192" 00:18:50.094 ] 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "bdev_nvme_set_hotplug", 00:18:50.094 "params": { 00:18:50.094 "period_us": 100000, 00:18:50.094 "enable": false 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "bdev_malloc_create", 00:18:50.094 "params": { 00:18:50.094 "name": "malloc0", 00:18:50.094 "num_blocks": 8192, 00:18:50.094 "block_size": 4096, 00:18:50.094 "physical_block_size": 4096, 00:18:50.094 "uuid": "d64865cf-0ed7-46ab-a52e-679c538c25da", 00:18:50.094 "optimal_io_boundary": 0, 00:18:50.094 "md_size": 0, 00:18:50.094 "dif_type": 0, 00:18:50.094 "dif_is_head_of_md": false, 00:18:50.094 "dif_pi_format": 0 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "bdev_wait_for_examine" 
00:18:50.094 } 00:18:50.094 ] 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "subsystem": "nbd", 00:18:50.094 "config": [] 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "subsystem": "scheduler", 00:18:50.094 "config": [ 00:18:50.094 { 00:18:50.094 "method": "framework_set_scheduler", 00:18:50.094 "params": { 00:18:50.094 "name": "static" 00:18:50.094 } 00:18:50.094 } 00:18:50.094 ] 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "subsystem": "nvmf", 00:18:50.094 "config": [ 00:18:50.094 { 00:18:50.094 "method": "nvmf_set_config", 00:18:50.094 "params": { 00:18:50.094 "discovery_filter": "match_any", 00:18:50.094 "admin_cmd_passthru": { 00:18:50.094 "identify_ctrlr": false 00:18:50.094 }, 00:18:50.094 "dhchap_digests": [ 00:18:50.094 "sha256", 00:18:50.094 "sha384", 00:18:50.094 "sha512" 00:18:50.094 ], 00:18:50.094 "dhchap_dhgroups": [ 00:18:50.094 "null", 00:18:50.094 "ffdhe2048", 00:18:50.094 "ffdhe3072", 00:18:50.094 "ffdhe4096", 00:18:50.094 "ffdhe6144", 00:18:50.094 "ffdhe8192" 00:18:50.094 ] 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_set_max_subsystems", 00:18:50.094 "params": { 00:18:50.094 "max_subsystems": 1024 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_set_crdt", 00:18:50.094 "params": { 00:18:50.094 "crdt1": 0, 00:18:50.094 "crdt2": 0, 00:18:50.094 "crdt3": 0 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_create_transport", 00:18:50.094 "params": { 00:18:50.094 "trtype": "TCP", 00:18:50.094 "max_queue_depth": 128, 00:18:50.094 "max_io_qpairs_per_ctrlr": 127, 00:18:50.094 "in_capsule_data_size": 4096, 00:18:50.094 "max_io_size": 131072, 00:18:50.094 "io_unit_size": 131072, 00:18:50.094 "max_aq_depth": 128, 00:18:50.094 "num_shared_buffers": 511, 00:18:50.094 "buf_cache_size": 4294967295, 00:18:50.094 "dif_insert_or_strip": false, 00:18:50.094 "zcopy": false, 00:18:50.094 "c2h_success": false, 00:18:50.094 "sock_priority": 0, 00:18:50.094 "abort_timeout_sec": 1, 00:18:50.094 "ack_timeout": 0, 00:18:50.094 "data_wr_pool_size": 0 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_create_subsystem", 00:18:50.094 "params": { 00:18:50.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.094 "allow_any_host": false, 00:18:50.094 "serial_number": "00000000000000000000", 00:18:50.094 "model_number": "SPDK bdev Controller", 00:18:50.094 "max_namespaces": 32, 00:18:50.094 "min_cntlid": 1, 00:18:50.094 "max_cntlid": 65519, 00:18:50.094 "ana_reporting": false 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_subsystem_add_host", 00:18:50.094 "params": { 00:18:50.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.094 "host": "nqn.2016-06.io.spdk:host1", 00:18:50.094 "psk": "key0" 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_subsystem_add_ns", 00:18:50.094 "params": { 00:18:50.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.094 "namespace": { 00:18:50.094 "nsid": 1, 00:18:50.094 "bdev_name": "malloc0", 00:18:50.094 "nguid": "D64865CF0ED746ABA52E679C538C25DA", 00:18:50.094 "uuid": "d64865cf-0ed7-46ab-a52e-679c538c25da", 00:18:50.094 "no_auto_visible": false 00:18:50.094 } 00:18:50.094 } 00:18:50.094 }, 00:18:50.094 { 00:18:50.094 "method": "nvmf_subsystem_add_listener", 00:18:50.094 "params": { 00:18:50.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.094 "listen_address": { 00:18:50.094 "trtype": "TCP", 00:18:50.094 "adrfam": "IPv4", 00:18:50.094 "traddr": "10.0.0.2", 00:18:50.094 "trsvcid": "4420" 00:18:50.094 }, 00:18:50.094 
"secure_channel": false, 00:18:50.094 "sock_impl": "ssl" 00:18:50.094 } 00:18:50.094 } 00:18:50.094 ] 00:18:50.094 } 00:18:50.094 ] 00:18:50.094 }' 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1452409 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1452409 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1452409 ']' 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.094 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.094 [2024-12-09 15:11:51.686111] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:18:50.094 [2024-12-09 15:11:51.686160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.094 [2024-12-09 15:11:51.762558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.094 [2024-12-09 15:11:51.801549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.094 [2024-12-09 15:11:51.801586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.094 [2024-12-09 15:11:51.801594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.094 [2024-12-09 15:11:51.801601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.094 [2024-12-09 15:11:51.801607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.094 [2024-12-09 15:11:51.802160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.354 [2024-12-09 15:11:52.015517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.354 [2024-12-09 15:11:52.047549] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.354 [2024-12-09 15:11:52.047740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1452640 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1452640 /var/tmp/bdevperf.sock 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1452640 ']' 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
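The initiator is restarted the same way: bdevperf consumes its own saved configuration (echoed below) through /dev/fd/63, so the keyring entry, the TLS-enabled bdev_nvme_attach_controller call and bdev_enable_histogram are re-applied without further RPCs. Once the "Waiting for process..." line appears, the workload is driven over the same socket; a sketch of the two follow-up commands used later in this trace, with paths shortened for readability:

# confirm the TLS-attached controller came up, then kick off the verify workload
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests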
00:18:50.921 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:50.921 "subsystems": [ 00:18:50.921 { 00:18:50.921 "subsystem": "keyring", 00:18:50.921 "config": [ 00:18:50.921 { 00:18:50.921 "method": "keyring_file_add_key", 00:18:50.921 "params": { 00:18:50.921 "name": "key0", 00:18:50.921 "path": "/tmp/tmp.TbGNrubY87" 00:18:50.921 } 00:18:50.921 } 00:18:50.921 ] 00:18:50.921 }, 00:18:50.921 { 00:18:50.921 "subsystem": "iobuf", 00:18:50.921 "config": [ 00:18:50.921 { 00:18:50.921 "method": "iobuf_set_options", 00:18:50.921 "params": { 00:18:50.921 "small_pool_count": 8192, 00:18:50.921 "large_pool_count": 1024, 00:18:50.921 "small_bufsize": 8192, 00:18:50.921 "large_bufsize": 135168, 00:18:50.921 "enable_numa": false 00:18:50.921 } 00:18:50.921 } 00:18:50.921 ] 00:18:50.921 }, 00:18:50.921 { 00:18:50.921 "subsystem": "sock", 00:18:50.921 "config": [ 00:18:50.921 { 00:18:50.921 "method": "sock_set_default_impl", 00:18:50.921 "params": { 00:18:50.921 "impl_name": "posix" 00:18:50.921 } 00:18:50.921 }, 00:18:50.921 { 00:18:50.921 "method": "sock_impl_set_options", 00:18:50.921 "params": { 00:18:50.921 "impl_name": "ssl", 00:18:50.921 "recv_buf_size": 4096, 00:18:50.921 "send_buf_size": 4096, 00:18:50.921 "enable_recv_pipe": true, 00:18:50.921 "enable_quickack": false, 00:18:50.921 "enable_placement_id": 0, 00:18:50.921 "enable_zerocopy_send_server": true, 00:18:50.921 "enable_zerocopy_send_client": false, 00:18:50.922 "zerocopy_threshold": 0, 00:18:50.922 "tls_version": 0, 00:18:50.922 "enable_ktls": false 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "sock_impl_set_options", 00:18:50.922 "params": { 00:18:50.922 "impl_name": "posix", 00:18:50.922 "recv_buf_size": 2097152, 00:18:50.922 "send_buf_size": 2097152, 00:18:50.922 "enable_recv_pipe": true, 00:18:50.922 "enable_quickack": false, 00:18:50.922 "enable_placement_id": 0, 00:18:50.922 "enable_zerocopy_send_server": true, 00:18:50.922 "enable_zerocopy_send_client": false, 00:18:50.922 "zerocopy_threshold": 0, 00:18:50.922 "tls_version": 0, 00:18:50.922 "enable_ktls": false 00:18:50.922 } 00:18:50.922 } 00:18:50.922 ] 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "subsystem": "vmd", 00:18:50.922 "config": [] 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "subsystem": "accel", 00:18:50.922 "config": [ 00:18:50.922 { 00:18:50.922 "method": "accel_set_options", 00:18:50.922 "params": { 00:18:50.922 "small_cache_size": 128, 00:18:50.922 "large_cache_size": 16, 00:18:50.922 "task_count": 2048, 00:18:50.922 "sequence_count": 2048, 00:18:50.922 "buf_count": 2048 00:18:50.922 } 00:18:50.922 } 00:18:50.922 ] 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "subsystem": "bdev", 00:18:50.922 "config": [ 00:18:50.922 { 00:18:50.922 "method": "bdev_set_options", 00:18:50.922 "params": { 00:18:50.922 "bdev_io_pool_size": 65535, 00:18:50.922 "bdev_io_cache_size": 256, 00:18:50.922 "bdev_auto_examine": true, 00:18:50.922 "iobuf_small_cache_size": 128, 00:18:50.922 "iobuf_large_cache_size": 16 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_raid_set_options", 00:18:50.922 "params": { 00:18:50.922 "process_window_size_kb": 1024, 00:18:50.922 "process_max_bandwidth_mb_sec": 0 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_iscsi_set_options", 00:18:50.922 "params": { 00:18:50.922 "timeout_sec": 30 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_nvme_set_options", 00:18:50.922 "params": { 00:18:50.922 "action_on_timeout": "none", 
00:18:50.922 "timeout_us": 0, 00:18:50.922 "timeout_admin_us": 0, 00:18:50.922 "keep_alive_timeout_ms": 10000, 00:18:50.922 "arbitration_burst": 0, 00:18:50.922 "low_priority_weight": 0, 00:18:50.922 "medium_priority_weight": 0, 00:18:50.922 "high_priority_weight": 0, 00:18:50.922 "nvme_adminq_poll_period_us": 10000, 00:18:50.922 "nvme_ioq_poll_period_us": 0, 00:18:50.922 "io_queue_requests": 512, 00:18:50.922 "delay_cmd_submit": true, 00:18:50.922 "transport_retry_count": 4, 00:18:50.922 "bdev_retry_count": 3, 00:18:50.922 "transport_ack_timeout": 0, 00:18:50.922 "ctrlr_loss_timeout_sec": 0, 00:18:50.922 "reconnect_delay_sec": 0, 00:18:50.922 "fast_io_fail_timeout_sec": 0, 00:18:50.922 "disable_auto_failback": false, 00:18:50.922 "generate_uuids": false, 00:18:50.922 "transport_tos": 0, 00:18:50.922 "nvme_error_stat": false, 00:18:50.922 "rdma_srq_size": 0, 00:18:50.922 "io_path_stat": false, 00:18:50.922 "allow_accel_sequence": false, 00:18:50.922 "rdma_max_cq_size": 0, 00:18:50.922 "rdma_cm_event_timeout_ms": 0, 00:18:50.922 "dhchap_digests": [ 00:18:50.922 "sha256", 00:18:50.922 "sha384", 00:18:50.922 "sha512" 00:18:50.922 ], 00:18:50.922 "dhchap_dhgroups": [ 00:18:50.922 "null", 00:18:50.922 "ffdhe2048", 00:18:50.922 "ffdhe3072", 00:18:50.922 "ffdhe4096", 00:18:50.922 "ffdhe6144", 00:18:50.922 "ffdhe8192" 00:18:50.922 ] 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_nvme_attach_controller", 00:18:50.922 "params": { 00:18:50.922 "name": "nvme0", 00:18:50.922 "trtype": "TCP", 00:18:50.922 "adrfam": "IPv4", 00:18:50.922 "traddr": "10.0.0.2", 00:18:50.922 "trsvcid": "4420", 00:18:50.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.922 "prchk_reftag": false, 00:18:50.922 "prchk_guard": false, 00:18:50.922 "ctrlr_loss_timeout_sec": 0, 00:18:50.922 "reconnect_delay_sec": 0, 00:18:50.922 "fast_io_fail_timeout_sec": 0, 00:18:50.922 "psk": "key0", 00:18:50.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.922 "hdgst": false, 00:18:50.922 "ddgst": false, 00:18:50.922 "multipath": "multipath" 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_nvme_set_hotplug", 00:18:50.922 "params": { 00:18:50.922 "period_us": 100000, 00:18:50.922 "enable": false 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_enable_histogram", 00:18:50.922 "params": { 00:18:50.922 "name": "nvme0n1", 00:18:50.922 "enable": true 00:18:50.922 } 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "method": "bdev_wait_for_examine" 00:18:50.922 } 00:18:50.922 ] 00:18:50.922 }, 00:18:50.922 { 00:18:50.922 "subsystem": "nbd", 00:18:50.922 "config": [] 00:18:50.922 } 00:18:50.922 ] 00:18:50.922 }' 00:18:50.922 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.922 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.922 [2024-12-09 15:11:52.599791] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
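The run below reports both IOPS and MiB/s for the 4 KiB verify workload; the two columns are the same measurement in different units, which gives a quick way to sanity-check a result line. A one-liner for the conversion, using the figures from the result below:

# MiB/s = IOPS x io_size / 2^20; with the 4096-byte I/O size used here:
awk 'BEGIN { printf "%.2f MiB/s\n", 5539.42 * 4096 / (1024 * 1024) }'   # prints 21.64 MiB/s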
00:18:50.922 [2024-12-09 15:11:52.599839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1452640 ] 00:18:50.922 [2024-12-09 15:11:52.673965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.922 [2024-12-09 15:11:52.712821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.182 [2024-12-09 15:11:52.865160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.749 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.749 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.749 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:51.749 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:52.008 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.008 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.008 Running I/O for 1 seconds... 00:18:53.387 5483.00 IOPS, 21.42 MiB/s 00:18:53.387 Latency(us) 00:18:53.387 [2024-12-09T14:11:55.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.387 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.387 Verification LBA range: start 0x0 length 0x2000 00:18:53.387 nvme0n1 : 1.01 5539.42 21.64 0.00 0.00 22953.37 5492.54 24716.43 00:18:53.387 [2024-12-09T14:11:55.182Z] =================================================================================================================== 00:18:53.387 [2024-12-09T14:11:55.182Z] Total : 5539.42 21.64 0.00 0.00 22953.37 5492.54 24716.43 00:18:53.387 { 00:18:53.387 "results": [ 00:18:53.387 { 00:18:53.387 "job": "nvme0n1", 00:18:53.387 "core_mask": "0x2", 00:18:53.387 "workload": "verify", 00:18:53.387 "status": "finished", 00:18:53.387 "verify_range": { 00:18:53.387 "start": 0, 00:18:53.387 "length": 8192 00:18:53.387 }, 00:18:53.387 "queue_depth": 128, 00:18:53.387 "io_size": 4096, 00:18:53.387 "runtime": 1.012922, 00:18:53.387 "iops": 5539.419619674565, 00:18:53.387 "mibps": 21.63835788935377, 00:18:53.387 "io_failed": 0, 00:18:53.387 "io_timeout": 0, 00:18:53.387 "avg_latency_us": 22953.37066544458, 00:18:53.387 "min_latency_us": 5492.540952380952, 00:18:53.387 "max_latency_us": 24716.434285714287 00:18:53.387 } 00:18:53.387 ], 00:18:53.387 "core_count": 1 00:18:53.387 } 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.387 nvmf_trace.0 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1452640 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1452640 ']' 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1452640 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1452640 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1452640' 00:18:53.387 killing process with pid 1452640 00:18:53.387 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1452640 00:18:53.387 Received shutdown signal, test time was about 1.000000 seconds 00:18:53.388 00:18:53.388 Latency(us) 00:18:53.388 [2024-12-09T14:11:55.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.388 [2024-12-09T14:11:55.183Z] =================================================================================================================== 00:18:53.388 [2024-12-09T14:11:55.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.388 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1452640 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.388 rmmod nvme_tcp 00:18:53.388 rmmod nvme_fabrics 00:18:53.388 rmmod nvme_keyring 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:53.388 15:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1452409 ']' 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1452409 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1452409 ']' 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1452409 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.388 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1452409 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1452409' 00:18:53.648 killing process with pid 1452409 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1452409 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1452409 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.648 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.h6MLBUCuZx /tmp/tmp.qbbHR4JqCQ /tmp/tmp.TbGNrubY87 00:18:56.185 00:18:56.185 real 1m19.514s 00:18:56.185 user 2m0.679s 00:18:56.185 sys 0m31.548s 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 ************************************ 00:18:56.185 END TEST nvmf_tls 
00:18:56.185 ************************************ 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 ************************************ 00:18:56.185 START TEST nvmf_fips 00:18:56.185 ************************************ 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.185 * Looking for test storage... 00:18:56.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.185 --rc genhtml_branch_coverage=1 00:18:56.185 --rc genhtml_function_coverage=1 00:18:56.185 --rc genhtml_legend=1 00:18:56.185 --rc geninfo_all_blocks=1 00:18:56.185 --rc geninfo_unexecuted_blocks=1 00:18:56.185 00:18:56.185 ' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.185 --rc genhtml_branch_coverage=1 00:18:56.185 --rc genhtml_function_coverage=1 00:18:56.185 --rc genhtml_legend=1 00:18:56.185 --rc geninfo_all_blocks=1 00:18:56.185 --rc geninfo_unexecuted_blocks=1 00:18:56.185 00:18:56.185 ' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.185 --rc genhtml_branch_coverage=1 00:18:56.185 --rc genhtml_function_coverage=1 00:18:56.185 --rc genhtml_legend=1 00:18:56.185 --rc geninfo_all_blocks=1 00:18:56.185 --rc geninfo_unexecuted_blocks=1 00:18:56.185 00:18:56.185 ' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.185 --rc genhtml_branch_coverage=1 00:18:56.185 --rc genhtml_function_coverage=1 00:18:56.185 --rc genhtml_legend=1 00:18:56.185 --rc geninfo_all_blocks=1 00:18:56.185 --rc geninfo_unexecuted_blocks=1 00:18:56.185 00:18:56.185 ' 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.185 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:56.186 15:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:56.186 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:56.187 Error setting digest 00:18:56.187 406288BE2E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:56.187 406288BE2E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:56.187 
15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:56.187 15:11:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.758 15:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:02.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:02.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.758 15:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.758 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:02.759 Found net devices under 0000:af:00.0: cvl_0_0 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:02.759 Found net devices under 0000:af:00.1: cvl_0_1 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.759 15:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:02.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:19:02.759 00:19:02.759 --- 10.0.0.2 ping statistics --- 00:19:02.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.759 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
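For orientation, the TCP test-bed setup the trace above just walked through boils down to the following sequence; the interface names, addresses, namespace name, and port are the ones used in this run, and the reverse-direction ping replies and statistics continue below:

  ip netns add cvl_0_0_ns_spdk                       # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # host -> namespace reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host check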
00:19:02.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:19:02.759 00:19:02.759 --- 10.0.0.1 ping statistics --- 00:19:02.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.759 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1456756 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1456756 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1456756 ']' 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.759 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.759 [2024-12-09 15:12:03.853674] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
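The nvmf target is then launched inside that namespace, and the test waits for its RPC socket before configuring anything. A rough sketch of what nvmfappstart -m 0x2 expands to here (the backgrounding and the waitforlisten polling loop are paraphrased; the command line itself is taken verbatim from this run):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten: keep retrying until the target answers on /var/tmp/spdk.sock,
  # then the fips test can drive it through scripts/rpc.py

The startup notices that follow show the target coming up on a single core (0x2) with tracing enabled.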
00:19:02.759 [2024-12-09 15:12:03.853718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.759 [2024-12-09 15:12:03.931866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.759 [2024-12-09 15:12:03.972182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.759 [2024-12-09 15:12:03.972225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.759 [2024-12-09 15:12:03.972232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.759 [2024-12-09 15:12:03.972238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.759 [2024-12-09 15:12:03.972259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.759 [2024-12-09 15:12:03.972760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2fe 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2fe 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2fe 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2fe 00:19:03.019 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.277 [2024-12-09 15:12:04.913965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.277 [2024-12-09 15:12:04.929963] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.277 [2024-12-09 15:12:04.930166] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.277 malloc0 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.277 15:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1457006 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1457006 /var/tmp/bdevperf.sock 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1457006 ']' 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.277 15:12:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:03.277 [2024-12-09 15:12:05.060414] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:19:03.277 [2024-12-09 15:12:05.060464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457006 ] 00:19:03.534 [2024-12-09 15:12:05.136430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.534 [2024-12-09 15:12:05.176156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.099 15:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.099 15:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:04.099 15:12:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2fe 00:19:04.356 15:12:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.615 [2024-12-09 15:12:06.254487] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.615 TLSTESTn1 00:19:04.615 15:12:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.872 Running I/O for 10 seconds... 
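What bdevperf is about to measure is an NVMe/TCP connection protected by the interchange-format TLS PSK set up above. Condensed into one place (paths abbreviated relative to the spdk checkout; the key string and temp-file name are the ones from this run), the initiator side is:

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.2fe
  chmod 0600 /tmp/spdk-psk.2fe                       # key file must not be world-readable
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2fe
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second throughput samples and the summary table follow below.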
00:19:06.742 5437.00 IOPS, 21.24 MiB/s [2024-12-09T14:12:09.471Z] 5583.50 IOPS, 21.81 MiB/s [2024-12-09T14:12:10.846Z] 5604.33 IOPS, 21.89 MiB/s [2024-12-09T14:12:11.803Z] 5585.25 IOPS, 21.82 MiB/s [2024-12-09T14:12:12.736Z] 5481.20 IOPS, 21.41 MiB/s [2024-12-09T14:12:13.669Z] 5499.67 IOPS, 21.48 MiB/s [2024-12-09T14:12:14.604Z] 5506.43 IOPS, 21.51 MiB/s [2024-12-09T14:12:15.540Z] 5517.12 IOPS, 21.55 MiB/s [2024-12-09T14:12:16.476Z] 5535.56 IOPS, 21.62 MiB/s [2024-12-09T14:12:16.735Z] 5538.00 IOPS, 21.63 MiB/s 00:19:14.940 Latency(us) 00:19:14.940 [2024-12-09T14:12:16.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.940 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.940 Verification LBA range: start 0x0 length 0x2000 00:19:14.940 TLSTESTn1 : 10.02 5541.30 21.65 0.00 0.00 23064.22 6241.52 23218.47 00:19:14.940 [2024-12-09T14:12:16.735Z] =================================================================================================================== 00:19:14.940 [2024-12-09T14:12:16.735Z] Total : 5541.30 21.65 0.00 0.00 23064.22 6241.52 23218.47 00:19:14.940 { 00:19:14.940 "results": [ 00:19:14.940 { 00:19:14.940 "job": "TLSTESTn1", 00:19:14.940 "core_mask": "0x4", 00:19:14.940 "workload": "verify", 00:19:14.940 "status": "finished", 00:19:14.940 "verify_range": { 00:19:14.940 "start": 0, 00:19:14.940 "length": 8192 00:19:14.940 }, 00:19:14.940 "queue_depth": 128, 00:19:14.940 "io_size": 4096, 00:19:14.940 "runtime": 10.016608, 00:19:14.940 "iops": 5541.297013919283, 00:19:14.940 "mibps": 21.6456914606222, 00:19:14.940 "io_failed": 0, 00:19:14.940 "io_timeout": 0, 00:19:14.940 "avg_latency_us": 23064.215897220754, 00:19:14.940 "min_latency_us": 6241.523809523809, 00:19:14.940 "max_latency_us": 23218.46857142857 00:19:14.940 } 00:19:14.940 ], 00:19:14.940 "core_count": 1 00:19:14.940 } 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:14.940 nvmf_trace.0 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1457006 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1457006 ']' 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1457006 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457006 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457006' 00:19:14.940 killing process with pid 1457006 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1457006 00:19:14.940 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.940 00:19:14.940 Latency(us) 00:19:14.940 [2024-12-09T14:12:16.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.940 [2024-12-09T14:12:16.735Z] =================================================================================================================== 00:19:14.940 [2024-12-09T14:12:16.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.940 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1457006 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.199 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.199 rmmod nvme_tcp 00:19:15.199 rmmod nvme_fabrics 00:19:15.199 rmmod nvme_keyring 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1456756 ']' 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1456756 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1456756 ']' 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1456756 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1456756 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:15.200 15:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1456756' 00:19:15.200 killing process with pid 1456756 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1456756 00:19:15.200 15:12:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1456756 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.459 15:12:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.364 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:17.364 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2fe 00:19:17.364 00:19:17.364 real 0m21.646s 00:19:17.364 user 0m23.427s 00:19:17.364 sys 0m9.677s 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:17.623 ************************************ 00:19:17.623 END TEST nvmf_fips 00:19:17.623 ************************************ 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.623 ************************************ 00:19:17.623 START TEST nvmf_control_msg_list 00:19:17.623 ************************************ 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:17.623 * Looking for test storage... 
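Before the control_msg_list test output resumes below, the teardown the fips test just ran reduces to roughly this (killprocess also verifies the process name and ownership before killing, which is omitted here; the pids are the ones from this run):

  kill 1457006 && wait 1457006                       # bdevperf (reactor_2)
  kill 1456756 && wait 1456756                       # nvmf_tgt (reactor_1)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged INPUT rule
  # _remove_spdk_ns (not expanded in this trace) deletes the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.2fe                            # remove the TLS PSK from disk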
00:19:17.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.623 --rc genhtml_branch_coverage=1 00:19:17.623 --rc genhtml_function_coverage=1 00:19:17.623 --rc genhtml_legend=1 00:19:17.623 --rc geninfo_all_blocks=1 00:19:17.623 --rc geninfo_unexecuted_blocks=1 00:19:17.623 00:19:17.623 ' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.623 --rc genhtml_branch_coverage=1 00:19:17.623 --rc genhtml_function_coverage=1 00:19:17.623 --rc genhtml_legend=1 00:19:17.623 --rc geninfo_all_blocks=1 00:19:17.623 --rc geninfo_unexecuted_blocks=1 00:19:17.623 00:19:17.623 ' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.623 --rc genhtml_branch_coverage=1 00:19:17.623 --rc genhtml_function_coverage=1 00:19:17.623 --rc genhtml_legend=1 00:19:17.623 --rc geninfo_all_blocks=1 00:19:17.623 --rc geninfo_unexecuted_blocks=1 00:19:17.623 00:19:17.623 ' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.623 --rc genhtml_branch_coverage=1 00:19:17.623 --rc genhtml_function_coverage=1 00:19:17.623 --rc genhtml_legend=1 00:19:17.623 --rc geninfo_all_blocks=1 00:19:17.623 --rc geninfo_unexecuted_blocks=1 00:19:17.623 00:19:17.623 ' 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.623 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:17.624 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.624 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.624 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.883 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.884 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.884 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.884 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:17.884 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.884 15:12:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:23.292 15:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:23.292 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.292 15:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:23.292 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:23.292 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:23.293 Found net devices under 0000:af:00.0: cvl_0_0 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:23.293 Found net devices under 0000:af:00.1: cvl_0_1 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.293 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.553 15:12:25 
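(Aside, not part of the captured output: the xtrace above is nvmf_tcp_init isolating the target NIC in its own network namespace. A minimal standalone sketch of that same sequence, with the interface/namespace names and addresses taken from this run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/24, 10.0.0.2/24) and assuming root privileges and that both ice ports exist, would look roughly like the following; the real common.sh helper additionally handles second-IP/virt cases.)

  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0      # target-side port, moved into the namespace
  INI_IF=cvl_0_1      # initiator-side port, stays in the default namespace

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up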
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:23.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:19:23.553 00:19:23.553 --- 10.0.0.2 ping statistics --- 00:19:23.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.553 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:23.553 00:19:23.553 --- 10.0.0.1 ping statistics --- 00:19:23.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.553 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1462704 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1462704 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1462704 ']' 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.553 15:12:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.811 [2024-12-09 15:12:25.391752] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:19:23.811 [2024-12-09 15:12:25.391799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.811 [2024-12-09 15:12:25.469967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.811 [2024-12-09 15:12:25.507512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.811 [2024-12-09 15:12:25.507550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.811 [2024-12-09 15:12:25.507558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.811 [2024-12-09 15:12:25.507564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.811 [2024-12-09 15:12:25.507569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
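(Aside, not part of the captured output: at this point nvmfappstart has launched nvmf_tgt inside the target namespace and waitforlisten is blocking until the app's RPC socket answers. A simplified sketch of that start-and-poll step, using the binary path and namespace shown in this run; the real waitforlisten helper adds a timeout, PID liveness checks and cleanup on failure.)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!

  # Poll the default RPC socket until the target accepts commands.
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done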
00:19:23.811 [2024-12-09 15:12:25.508088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:24.745 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 [2024-12-09 15:12:26.265422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 Malloc0 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 15:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 [2024-12-09 15:12:26.305871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1462944 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1462945 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1462946 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1462944 00:19:24.746 15:12:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.746 [2024-12-09 15:12:26.374260] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:24.746 [2024-12-09 15:12:26.384156] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:24.746 [2024-12-09 15:12:26.394177] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:25.681 Initializing NVMe Controllers 00:19:25.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:25.681 Initialization complete. Launching workers. 
00:19:25.681 ======================================================== 00:19:25.681 Latency(us) 00:19:25.681 Device Information : IOPS MiB/s Average min max 00:19:25.681 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40902.96 40790.93 41063.19 00:19:25.681 ======================================================== 00:19:25.681 Total : 25.00 0.10 40902.96 40790.93 41063.19 00:19:25.681 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1462945 00:19:25.940 Initializing NVMe Controllers 00:19:25.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:25.940 Initialization complete. Launching workers. 00:19:25.940 ======================================================== 00:19:25.940 Latency(us) 00:19:25.940 Device Information : IOPS MiB/s Average min max 00:19:25.940 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40903.05 40804.74 41033.38 00:19:25.940 ======================================================== 00:19:25.940 Total : 25.00 0.10 40903.05 40804.74 41033.38 00:19:25.940 00:19:25.940 Initializing NVMe Controllers 00:19:25.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:25.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:25.940 Initialization complete. Launching workers. 00:19:25.940 ======================================================== 00:19:25.940 Latency(us) 00:19:25.940 Device Information : IOPS MiB/s Average min max 00:19:25.940 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40891.30 40670.76 40966.56 00:19:25.940 ======================================================== 00:19:25.940 Total : 25.00 0.10 40891.30 40670.76 40966.56 00:19:25.940 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1462946 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.940 rmmod nvme_tcp 00:19:25.940 rmmod nvme_fabrics 00:19:25.940 rmmod nvme_keyring 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 1462704 ']' 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1462704 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1462704 ']' 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1462704 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.940 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1462704 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1462704' 00:19:26.200 killing process with pid 1462704 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1462704 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1462704 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.200 15:12:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.737 15:12:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:28.737 00:19:28.737 real 0m10.764s 00:19:28.737 user 0m7.708s 00:19:28.737 sys 0m5.263s 00:19:28.737 15:12:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.738 15:12:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.738 ************************************ 00:19:28.738 END TEST nvmf_control_msg_list 00:19:28.738 
************************************ 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.738 ************************************ 00:19:28.738 START TEST nvmf_wait_for_buf 00:19:28.738 ************************************ 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:28.738 * Looking for test storage... 00:19:28.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:28.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.738 --rc genhtml_branch_coverage=1 00:19:28.738 --rc genhtml_function_coverage=1 00:19:28.738 --rc genhtml_legend=1 00:19:28.738 --rc geninfo_all_blocks=1 00:19:28.738 --rc geninfo_unexecuted_blocks=1 00:19:28.738 00:19:28.738 ' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:28.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.738 --rc genhtml_branch_coverage=1 00:19:28.738 --rc genhtml_function_coverage=1 00:19:28.738 --rc genhtml_legend=1 00:19:28.738 --rc geninfo_all_blocks=1 00:19:28.738 --rc geninfo_unexecuted_blocks=1 00:19:28.738 00:19:28.738 ' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:28.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.738 --rc genhtml_branch_coverage=1 00:19:28.738 --rc genhtml_function_coverage=1 00:19:28.738 --rc genhtml_legend=1 00:19:28.738 --rc geninfo_all_blocks=1 00:19:28.738 --rc geninfo_unexecuted_blocks=1 00:19:28.738 00:19:28.738 ' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:28.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.738 --rc genhtml_branch_coverage=1 00:19:28.738 --rc genhtml_function_coverage=1 00:19:28.738 --rc genhtml_legend=1 00:19:28.738 --rc geninfo_all_blocks=1 00:19:28.738 --rc geninfo_unexecuted_blocks=1 00:19:28.738 00:19:28.738 ' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.738 15:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.738 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.739 15:12:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.313 
15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:35.313 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:35.313 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:35.313 Found net devices under 0000:af:00.0: cvl_0_0 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:35.313 Found net devices under 0000:af:00.1: cvl_0_1 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.313 15:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.313 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.314 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:35.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:19:35.314 00:19:35.314 --- 10.0.0.2 ping statistics --- 00:19:35.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.314 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:35.314 00:19:35.314 --- 10.0.0.1 ping statistics --- 00:19:35.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.314 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1466672 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1466672 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1466672 ']' 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 [2024-12-09 15:12:36.263597] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
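For anyone reconstructing this test bed by hand, the namespace plumbing and target launch traced above boil down to the sequence below. This is a minimal sketch using only the interface names, addresses, iptables rule, and nvmf_tgt path recorded in this log; it is not the verbatim nvmf/common.sh code, and it needs root. The DPDK EAL lines that follow are that target coming up inside the namespace and then waiting on /var/tmp/spdk.sock for RPCs.

  #!/usr/bin/env bash
  # Sketch of the nvmf_tcp_init bring-up traced in this log. Interface names,
  # IPs and the nvmf_tgt path are the ones captured above; adjust elsewhere.
  set -euxo pipefail

  TARGET_IF=cvl_0_0            # E810 port handed to the SPDK target
  INITIATOR_IF=cvl_0_1         # E810 port left in the default netns for the initiator
  NS=cvl_0_0_ns_spdk
  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

  # Drop any stale addresses, then isolate the target port in its own namespace.
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  # 10.0.0.1 is the initiator side, 10.0.0.2 the target side, as in the trace.
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open TCP/4420 (NVMe/TCP) on the initiator-facing interface, tagged with an
  # SPDK_NVMF comment so the cleanup path can strip it again later.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Sanity-check reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Launch the SPDK target inside the namespace, paused until RPCs configure it;
  # the wait_for_buf test then drives it over /var/tmp/spdk.sock.
  ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF --wait-for-rpc &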
00:19:35.314 [2024-12-09 15:12:36.263642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.314 [2024-12-09 15:12:36.342039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.314 [2024-12-09 15:12:36.381240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.314 [2024-12-09 15:12:36.381275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.314 [2024-12-09 15:12:36.381282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.314 [2024-12-09 15:12:36.381288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.314 [2024-12-09 15:12:36.381293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.314 [2024-12-09 15:12:36.381817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 Malloc0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 [2024-12-09 15:12:36.547399] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 [2024-12-09 15:12:36.575579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.314 15:12:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:35.314 [2024-12-09 15:12:36.661290] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:36.693 Initializing NVMe Controllers 00:19:36.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:36.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:36.693 Initialization complete. Launching workers. 00:19:36.693 ======================================================== 00:19:36.693 Latency(us) 00:19:36.693 Device Information : IOPS MiB/s Average min max 00:19:36.693 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.00 15.38 33909.24 7292.50 71840.11 00:19:36.693 ======================================================== 00:19:36.693 Total : 123.00 15.38 33909.24 7292.50 71840.11 00:19:36.693 00:19:36.693 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:36.693 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1942 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1942 -eq 0 ]] 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.694 rmmod nvme_tcp 00:19:36.694 rmmod nvme_fabrics 00:19:36.694 rmmod nvme_keyring 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1466672 ']' 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1466672 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1466672 ']' 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1466672 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1466672 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1466672' 00:19:36.694 killing process with pid 1466672 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1466672 00:19:36.694 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1466672 00:19:36.953 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:36.953 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:36.953 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:36.953 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.954 15:12:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:38.860 00:19:38.860 real 0m10.511s 00:19:38.860 user 0m4.097s 00:19:38.860 sys 0m4.792s 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.860 ************************************ 00:19:38.860 END TEST nvmf_wait_for_buf 00:19:38.860 ************************************ 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:38.860 15:12:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.860 15:12:40 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.437 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:45.438 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:45.438 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:45.438 Found net devices under 0000:af:00.0: cvl_0_0 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:45.438 Found net devices under 0000:af:00.1: cvl_0_1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.438 ************************************ 00:19:45.438 START TEST nvmf_perf_adq 00:19:45.438 ************************************ 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:45.438 * Looking for test storage... 00:19:45.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.438 15:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:45.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.438 --rc genhtml_branch_coverage=1 00:19:45.438 --rc genhtml_function_coverage=1 00:19:45.438 --rc genhtml_legend=1 00:19:45.438 --rc geninfo_all_blocks=1 00:19:45.438 --rc geninfo_unexecuted_blocks=1 00:19:45.438 00:19:45.438 ' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.438 --rc genhtml_branch_coverage=1 00:19:45.438 --rc genhtml_function_coverage=1 00:19:45.438 --rc genhtml_legend=1 00:19:45.438 --rc geninfo_all_blocks=1 00:19:45.438 --rc geninfo_unexecuted_blocks=1 00:19:45.438 00:19:45.438 ' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.438 --rc genhtml_branch_coverage=1 00:19:45.438 --rc genhtml_function_coverage=1 00:19:45.438 --rc genhtml_legend=1 00:19:45.438 --rc geninfo_all_blocks=1 00:19:45.438 --rc geninfo_unexecuted_blocks=1 00:19:45.438 00:19:45.438 ' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.438 --rc genhtml_branch_coverage=1 00:19:45.438 --rc genhtml_function_coverage=1 00:19:45.438 --rc genhtml_legend=1 00:19:45.438 --rc geninfo_all_blocks=1 00:19:45.438 --rc geninfo_unexecuted_blocks=1 00:19:45.438 00:19:45.438 ' 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
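As an aside for anyone puzzling over the cmp_versions walk traced above (lt 1.15 2 splitting both versions on IFS=.-: and comparing component by component): the comparison it performs is roughly the sketch below. This is a simplified re-derivation, not the exact scripts/common.sh implementation; the lcov invocation and option choice at the end are assumptions based on what this trace exports into LCOV_OPTS.

  #!/usr/bin/env bash
  # Rough sketch of the dotted-version test stepped through above.
  # cmp_versions VER1 OP VER2 returns 0 (true) when the relation holds.
  cmp_versions() {
      local -a ver1 ver2
      local v d1 d2 len
      IFS='.-:' read -ra ver1 <<< "$1"      # e.g. "1.15" -> (1 15)
      IFS='.-:' read -ra ver2 <<< "$3"      # e.g. "2"    -> (2)
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          d1=${ver1[v]:-0}; d2=${ver2[v]:-0}       # missing components count as 0
          (( d1 > d2 )) && { [[ $2 == '<' ]] && return 1 || return 0; }
          (( d1 < d2 )) && { [[ $2 == '<' ]] && return 0 || return 1; }
      done
      return 1    # all components equal: a strict comparison fails
  }

  # Illustrative use, mirroring the trace: the installed lcov reports 1.15,
  # 1.15 < 2 holds, so the legacy --rc lcov_* option spelling is kept.
  lcov_ver=$(lcov --version 2>/dev/null | awk '{print $NF}')
  if cmp_versions "${lcov_ver:-0}" '<' 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi
  echo "lcov ${lcov_ver:-unknown}: rc options '${lcov_rc_opt:-}'"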
00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.438 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:45.439 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.439 15:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.716 15:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:50.716 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:50.716 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:50.716 Found net devices under 0000:af:00.0: cvl_0_0 00:19:50.716 15:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:50.716 Found net devices under 0000:af:00.1: cvl_0_1 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:50.716 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:51.654 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:54.189 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.465 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:59.466 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:59.466 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:59.466 Found net devices under 0000:af:00.0: cvl_0_0 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:59.466 Found net devices under 0000:af:00.1: cvl_0_1 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
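The device-discovery pass traced above maps each supported PCI function to the net devices bound under it in sysfs. A minimal stand-alone sketch of the same idea, assuming the Intel E810 ports (device ID 0x159b) seen in this run; the helper itself walks a prebuilt pci_bus_cache, so lspci here is purely illustrative:

  # Enumerate E810 (8086:159b) functions and the net devices under them,
  # mirroring the "Found net devices under ..." lines printed above.
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done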
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.466 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.971 ms 00:19:59.466 00:19:59.466 --- 10.0.0.2 ping statistics --- 00:19:59.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.466 rtt min/avg/max/mdev = 0.971/0.971/0.971/0.000 ms 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:19:59.466 00:19:59.466 --- 10.0.0.1 ping statistics --- 00:19:59.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.466 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:59.466 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1475042 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1475042 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1475042 ']' 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.467 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.732 [2024-12-09 15:13:01.280739] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
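The nvmf_tcp_init steps traced above build a single-host loopback topology: the target port (cvl_0_0 in this run) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms the path. A condensed sketch of that sequence; the interface and namespace names are the ones from this run and will differ on other hardware:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk    # names from this run
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                     # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root ns -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1                # namespace -> initiator port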
00:19:59.732 [2024-12-09 15:13:01.280787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.732 [2024-12-09 15:13:01.359404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.732 [2024-12-09 15:13:01.400998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.732 [2024-12-09 15:13:01.401038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.732 [2024-12-09 15:13:01.401045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.732 [2024-12-09 15:13:01.401052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.732 [2024-12-09 15:13:01.401057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.732 [2024-12-09 15:13:01.402630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.732 [2024-12-09 15:13:01.402734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.732 [2024-12-09 15:13:01.402840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.732 [2024-12-09 15:13:01.402840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.732 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 
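adq_configure_nvmf_target 0, traced above and continuing below, drives the baseline (non-ADQ) target entirely over RPC, with placement ID 0 on the posix sock implementation and sock priority 0 on the TCP transport. Roughly equivalent manual calls, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket (an assumption, not shown in this log):

  RPC="scripts/rpc.py"                                      # assumed wrapper behind rpc_cmd
  impl=$($RPC sock_get_default_impl | jq -r .impl_name)     # -> posix in this run
  $RPC sock_impl_set_options -i "$impl" --enable-placement-id 0 --enable-zerocopy-send-server
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420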
15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 [2024-12-09 15:13:01.616617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 Malloc1 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.991 [2024-12-09 15:13:01.681969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1475250 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:59.991 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:02.527 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:02.527 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.527 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.527 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.527 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:02.527 "tick_rate": 2100000000, 00:20:02.527 "poll_groups": [ 00:20:02.527 { 00:20:02.527 "name": "nvmf_tgt_poll_group_000", 00:20:02.527 "admin_qpairs": 1, 00:20:02.527 "io_qpairs": 1, 00:20:02.527 "current_admin_qpairs": 1, 00:20:02.527 "current_io_qpairs": 1, 00:20:02.527 "pending_bdev_io": 0, 00:20:02.527 "completed_nvme_io": 20524, 00:20:02.527 "transports": [ 00:20:02.527 { 00:20:02.527 "trtype": "TCP" 00:20:02.527 } 00:20:02.527 ] 00:20:02.527 }, 00:20:02.527 { 00:20:02.527 "name": "nvmf_tgt_poll_group_001", 00:20:02.527 "admin_qpairs": 0, 00:20:02.527 "io_qpairs": 1, 00:20:02.527 "current_admin_qpairs": 0, 00:20:02.527 "current_io_qpairs": 1, 00:20:02.527 "pending_bdev_io": 0, 00:20:02.527 "completed_nvme_io": 20574, 00:20:02.527 "transports": [ 00:20:02.527 { 00:20:02.527 "trtype": "TCP" 00:20:02.527 } 00:20:02.527 ] 00:20:02.527 }, 00:20:02.527 { 00:20:02.527 "name": "nvmf_tgt_poll_group_002", 00:20:02.527 "admin_qpairs": 0, 00:20:02.527 "io_qpairs": 1, 00:20:02.527 "current_admin_qpairs": 0, 00:20:02.527 "current_io_qpairs": 1, 00:20:02.527 "pending_bdev_io": 0, 00:20:02.527 "completed_nvme_io": 20232, 00:20:02.527 "transports": [ 00:20:02.527 { 00:20:02.527 "trtype": "TCP" 00:20:02.527 } 00:20:02.527 ] 00:20:02.528 }, 00:20:02.528 { 00:20:02.528 "name": "nvmf_tgt_poll_group_003", 00:20:02.528 "admin_qpairs": 0, 00:20:02.528 "io_qpairs": 1, 00:20:02.528 "current_admin_qpairs": 0, 00:20:02.528 "current_io_qpairs": 1, 00:20:02.528 "pending_bdev_io": 0, 00:20:02.528 "completed_nvme_io": 20465, 00:20:02.528 "transports": [ 00:20:02.528 { 00:20:02.528 "trtype": "TCP" 00:20:02.528 } 00:20:02.528 ] 00:20:02.528 } 00:20:02.528 ] 00:20:02.528 }' 00:20:02.528 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:02.528 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:02.528 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:02.528 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:02.528 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1475250 00:20:10.647 Initializing NVMe Controllers 00:20:10.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:10.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:10.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:10.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:10.647 
Initialization complete. Launching workers. 00:20:10.647 ======================================================== 00:20:10.647 Latency(us) 00:20:10.647 Device Information : IOPS MiB/s Average min max 00:20:10.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10791.50 42.15 5931.91 2028.15 10473.70 00:20:10.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10968.50 42.85 5834.00 2005.09 11086.37 00:20:10.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10809.60 42.22 5921.26 2341.80 11162.18 00:20:10.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10923.30 42.67 5859.25 2216.66 10840.14 00:20:10.648 ======================================================== 00:20:10.648 Total : 43492.90 169.89 5886.32 2005.09 11162.18 00:20:10.648 00:20:10.648 [2024-12-09 15:13:11.838761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x260f6c0 is same with the state(6) to be set 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.648 rmmod nvme_tcp 00:20:10.648 rmmod nvme_fabrics 00:20:10.648 rmmod nvme_keyring 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1475042 ']' 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1475042 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1475042 ']' 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1475042 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475042 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1475042' 00:20:10.648 killing process with pid 1475042 00:20:10.648 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1475042 00:20:10.648 15:13:11 
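The nvmf_get_stats check at target/perf_adq.sh@85-87 above confirms that, without ADQ, the 64-deep randread load from cores 0xF0 lands one I/O queue pair on each of the four target poll groups. A sketch of the same check, under the same scripts/rpc.py assumption as earlier:

  # Baseline run: expect all 4 poll groups (one per reactor on -m 0xF) to each
  # own exactly one I/O qpair, as the stats dump above shows.
  count=$(scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
  [[ $count -ne 4 ]] && echo "unexpected qpair distribution: $count/4 busy poll groups"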
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1475042 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.648 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.555 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:12.555 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:12.555 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:12.555 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:13.941 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:16.475 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
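Between the two passes, nvmftestfini and adq_reload_driver (traced above) tear the stack down and reload the ice driver so the ADQ channel configuration starts from a clean state; the 5-second sleep gives the ports time to come back before they are reused. A condensed sketch; the ip netns delete line is the assumed effect of remove_spdk_ns, which runs with tracing disabled:

  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # drop initiator-side modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK_NVMF rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed effect of remove_spdk_ns
  ip -4 addr flush cvl_0_1
  modprobe -a sch_mqprio
  rmmod ice && modprobe ice
  sleep 5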
-- common/autotest_common.sh@10 -- # set +x 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.749 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 
-- # [[ e810 == mlx5 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:21.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:21.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:21.750 Found net devices under 0000:af:00.0: cvl_0_0 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:21.750 Found net devices under 0000:af:00.1: cvl_0_1 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.750 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.750 15:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:21.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.966 ms 00:20:21.750 00:20:21.750 --- 10.0.0.2 ping statistics --- 00:20:21.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.750 rtt min/avg/max/mdev = 0.966/0.966/0.966/0.000 ms 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:21.750 00:20:21.750 --- 10.0.0.1 ping statistics --- 00:20:21.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.750 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:21.750 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w 
net.core.busy_poll=1 00:20:21.751 net.core.busy_poll = 1 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:21.751 net.core.busy_read = 1 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1479143 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1479143 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1479143 ']' 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.751 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.751 [2024-12-09 15:13:23.514281] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
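adq_configure_driver, traced above, is where ADQ gets wired into the E810 port: hardware TC offload is enabled, busy polling is turned on, an mqprio root qdisc splits the queues into two traffic classes, and a hardware-offloaded flower filter steers NVMe/TCP traffic to port 4420 into TC1. The same steps gathered in one place, using the interface and namespace names from this run:

  NS="ip netns exec cvl_0_0_ns_spdk"; IF=cvl_0_0        # names from this run
  $NS ethtool --offload "$IF" hw-tc-offload on
  $NS ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
  $NS tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev "$IF" ingress
  # Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1, offloaded to the NIC
  $NS tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  $NS scripts/perf/nvmf/set_xps_rxqs "$IF"              # SPDK helper aligning XPS with the queues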
00:20:21.751 [2024-12-09 15:13:23.514326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.010 [2024-12-09 15:13:23.593257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.010 [2024-12-09 15:13:23.631722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.010 [2024-12-09 15:13:23.631760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.010 [2024-12-09 15:13:23.631767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.010 [2024-12-09 15:13:23.631772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.010 [2024-12-09 15:13:23.631777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.010 [2024-12-09 15:13:23.633467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.010 [2024-12-09 15:13:23.633577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.010 [2024-12-09 15:13:23.633681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.010 [2024-12-09 15:13:23.633682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.578 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.578 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:22.578 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.578 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.578 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.836 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.836 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:22.836 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:22.836 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:22.836 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 
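The second pass, adq_configure_nvmf_target 1, differs from the baseline only in the two values that make the target ADQ-aware: placement ID 1 on the posix sock implementation and sock priority 1 on the TCP transport, matching the TC1 class created by the mqprio/flower setup above. Roughly equivalent RPC calls, under the same scripts/rpc.py assumption as before:

  scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  # Malloc1 bdev, subsystem, namespace and listener are then created exactly as in the baseline pass.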
15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 [2024-12-09 15:13:24.525146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 Malloc1 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.837 [2024-12-09 15:13:24.587865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1479395 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:22.837 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:25.373 "tick_rate": 2100000000, 00:20:25.373 "poll_groups": [ 00:20:25.373 { 00:20:25.373 "name": "nvmf_tgt_poll_group_000", 00:20:25.373 "admin_qpairs": 1, 00:20:25.373 "io_qpairs": 1, 00:20:25.373 "current_admin_qpairs": 1, 00:20:25.373 "current_io_qpairs": 1, 00:20:25.373 "pending_bdev_io": 0, 00:20:25.373 "completed_nvme_io": 28499, 00:20:25.373 "transports": [ 00:20:25.373 { 00:20:25.373 "trtype": "TCP" 00:20:25.373 } 00:20:25.373 ] 00:20:25.373 }, 00:20:25.373 { 00:20:25.373 "name": "nvmf_tgt_poll_group_001", 00:20:25.373 "admin_qpairs": 0, 00:20:25.373 "io_qpairs": 3, 00:20:25.373 "current_admin_qpairs": 0, 00:20:25.373 "current_io_qpairs": 3, 00:20:25.373 "pending_bdev_io": 0, 00:20:25.373 "completed_nvme_io": 29397, 00:20:25.373 "transports": [ 00:20:25.373 { 00:20:25.373 "trtype": "TCP" 00:20:25.373 } 00:20:25.373 ] 00:20:25.373 }, 00:20:25.373 { 00:20:25.373 "name": "nvmf_tgt_poll_group_002", 00:20:25.373 "admin_qpairs": 0, 00:20:25.373 "io_qpairs": 0, 00:20:25.373 "current_admin_qpairs": 0, 00:20:25.373 "current_io_qpairs": 0, 00:20:25.373 "pending_bdev_io": 0, 00:20:25.373 "completed_nvme_io": 0, 00:20:25.373 "transports": [ 00:20:25.373 { 00:20:25.373 "trtype": "TCP" 00:20:25.373 } 00:20:25.373 ] 00:20:25.373 }, 00:20:25.373 { 00:20:25.373 "name": "nvmf_tgt_poll_group_003", 00:20:25.373 "admin_qpairs": 0, 00:20:25.373 "io_qpairs": 0, 00:20:25.373 "current_admin_qpairs": 0, 00:20:25.373 "current_io_qpairs": 0, 00:20:25.373 "pending_bdev_io": 0, 00:20:25.373 "completed_nvme_io": 0, 00:20:25.373 "transports": [ 00:20:25.373 { 00:20:25.373 "trtype": "TCP" 00:20:25.373 } 00:20:25.373 ] 00:20:25.373 } 00:20:25.373 ] 00:20:25.373 }' 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:25.373 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1479395 00:20:33.635 Initializing NVMe Controllers 00:20:33.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:33.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:33.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:33.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:20:33.635 Initialization complete. Launching workers. 00:20:33.635 ======================================================== 00:20:33.635 Latency(us) 00:20:33.635 Device Information : IOPS MiB/s Average min max 00:20:33.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6031.60 23.56 10612.96 1551.60 58594.36 00:20:33.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4549.70 17.77 14070.04 1907.03 58107.85 00:20:33.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4878.50 19.06 13160.04 1878.93 59164.18 00:20:33.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15494.50 60.53 4130.04 1544.00 6781.76 00:20:33.635 ======================================================== 00:20:33.635 Total : 30954.29 120.92 8277.42 1544.00 59164.18 00:20:33.635 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.635 rmmod nvme_tcp 00:20:33.635 rmmod nvme_fabrics 00:20:33.635 rmmod nvme_keyring 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1479143 ']' 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1479143 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1479143 ']' 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1479143 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479143 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479143' 00:20:33.635 killing process with pid 1479143 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1479143 00:20:33.635 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1479143 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.635 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.540 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.540 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:35.540 00:20:35.540 real 0m50.938s 00:20:35.540 user 2m46.953s 00:20:35.540 sys 0m10.301s 00:20:35.540 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.540 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.540 ************************************ 00:20:35.540 END TEST nvmf_perf_adq 00:20:35.540 ************************************ 00:20:35.541 15:13:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:35.541 15:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.541 15:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.541 15:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.541 ************************************ 00:20:35.541 START TEST nvmf_shutdown 00:20:35.541 ************************************ 00:20:35.541 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:35.541 * Looking for test storage... 
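The stats check traced above pulls nvmf_get_stats from the target and counts the poll groups whose current_io_qpairs is 0; with ADQ steering the connections onto a subset of cores, the test appears to expect at least two of the four poll groups to stay idle (the `[[ 2 -lt 2 ]]` branch is the failure path). A standalone sketch of the same check; the rpc.py path and the threshold are assumptions drawn from this run, not fixed values of the harness:

#!/usr/bin/env bash
# Sketch of the idle-poll-group check seen in the perf_adq trace above.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed location
EXPECTED_IDLE=2                                                        # assumed threshold

stats=$("$RPC" nvmf_get_stats)

# Emit one line per poll group with no active I/O qpairs, then count the lines.
idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)

if [[ $idle -lt $EXPECTED_IDLE ]]; then
    echo "only $idle idle poll groups, expected at least $EXPECTED_IDLE" >&2
    exit 1
fi
echo "idle poll groups: $idle"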
00:20:35.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:35.541 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:35.800 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:35.800 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:35.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.801 --rc genhtml_branch_coverage=1 00:20:35.801 --rc genhtml_function_coverage=1 00:20:35.801 --rc genhtml_legend=1 00:20:35.801 --rc geninfo_all_blocks=1 00:20:35.801 --rc geninfo_unexecuted_blocks=1 00:20:35.801 00:20:35.801 ' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:35.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.801 --rc genhtml_branch_coverage=1 00:20:35.801 --rc genhtml_function_coverage=1 00:20:35.801 --rc genhtml_legend=1 00:20:35.801 --rc geninfo_all_blocks=1 00:20:35.801 --rc geninfo_unexecuted_blocks=1 00:20:35.801 00:20:35.801 ' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:35.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.801 --rc genhtml_branch_coverage=1 00:20:35.801 --rc genhtml_function_coverage=1 00:20:35.801 --rc genhtml_legend=1 00:20:35.801 --rc geninfo_all_blocks=1 00:20:35.801 --rc geninfo_unexecuted_blocks=1 00:20:35.801 00:20:35.801 ' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:35.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.801 --rc genhtml_branch_coverage=1 00:20:35.801 --rc genhtml_function_coverage=1 00:20:35.801 --rc genhtml_legend=1 00:20:35.801 --rc geninfo_all_blocks=1 00:20:35.801 --rc geninfo_unexecuted_blocks=1 00:20:35.801 00:20:35.801 ' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
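Before the shutdown tests proper, autotest_common.sh probes the installed lcov through the lt/cmp_versions helpers from scripts/common.sh (the version-comparison trace above) and, for releases older than 2.x, exports the --rc lcov_* spelling of the coverage options. A simplified sketch of that gate, assuming plain dot-separated numeric versions; the real helper also accepts '-' and ':' separators and exports additional genhtml/geninfo flags:

#!/usr/bin/env bash
# Minimal version gate: return 0 (true) if $1 is strictly older than $2.
version_lt() {
    local -a a b
    local i x y n
    IFS=. read -r -a a <<< "$1"
    IFS=. read -r -a b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0}
        y=${b[i]:-0}
        if ((x < y)); then return 0; fi
        if ((x > y)); then return 1; fi
    done
    return 1
}

lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then
    # Older lcov (1.x) wants the --rc spelling of the coverage knobs
    # (the harness exports several more than shown here).
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    export LCOV="lcov $LCOV_OPTS"
fi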
00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:35.801 15:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.801 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:35.802 ************************************ 00:20:35.802 START TEST nvmf_shutdown_tc1 00:20:35.802 ************************************ 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.802 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.372 15:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.372 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.373 15:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:42.373 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:42.373 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:42.373 Found net devices under 0000:af:00.0: cvl_0_0 00:20:42.373 15:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:42.373 Found net devices under 0000:af:00.1: cvl_0_1 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:20:42.373 00:20:42.373 --- 10.0.0.2 ping statistics --- 00:20:42.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.373 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:42.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:20:42.373 00:20:42.373 --- 10.0.0.1 ping statistics --- 00:20:42.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.373 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1484570 00:20:42.373 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1484570 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1484570 ']' 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
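By this point the harness has identified the two E810 ports (cvl_0_0 and cvl_0_1, driver ice), moved the target-side port into the cvl_0_0_ns_spdk namespace, addressed the two ends as 10.0.0.2 (target) and 10.0.0.1 (initiator) on a /24, opened TCP port 4420, and ping-verified both directions before launching nvmf_tgt inside the namespace. Collapsed into a standalone sketch; interface and namespace names are the ones from this run, so treat them as placeholders elsewhere:

#!/usr/bin/env bash
# Consolidated sketch of the namespace plumbing traced above.
set -e

TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-side interface. The harness
# tags the rule with an SPDK_NVMF: comment so teardown can strip it out of
# iptables-save (the "grep -v SPDK_NVMF" seen during cleanup earlier).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP 4420'

# Sanity-check both directions before starting the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1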
00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 [2024-12-09 15:13:43.583438] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:42.374 [2024-12-09 15:13:43.583489] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.374 [2024-12-09 15:13:43.665089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.374 [2024-12-09 15:13:43.706120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.374 [2024-12-09 15:13:43.706158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.374 [2024-12-09 15:13:43.706165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.374 [2024-12-09 15:13:43.706171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.374 [2024-12-09 15:13:43.706176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.374 [2024-12-09 15:13:43.707549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.374 [2024-12-09 15:13:43.707660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.374 [2024-12-09 15:13:43.707762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.374 [2024-12-09 15:13:43.707764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 [2024-12-09 15:13:43.851789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:42.374 15:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.374 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 Malloc1 
00:20:42.374 [2024-12-09 15:13:43.963523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.374 Malloc2 00:20:42.374 Malloc3 00:20:42.374 Malloc4 00:20:42.374 Malloc5 00:20:42.374 Malloc6 00:20:42.633 Malloc7 00:20:42.634 Malloc8 00:20:42.634 Malloc9 00:20:42.634 Malloc10 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1484833 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1484833 /var/tmp/bdevperf.sock 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1484833 ']' 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
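The trace that follows shows gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem (cnode1 through cnode10 at 10.0.0.2:4420) and handing the merged JSON to bdev_svc over /dev/fd/63. A condensed sketch of that generator, mirroring the heredoc-per-subsystem pattern; the outer subsystems/bdev/config wrapper is an approximation of the helper in nvmf/common.sh rather than a verbatim copy, and the addresses and NQNs simply mirror this run:

#!/usr/bin/env bash
# One heredoc fragment per subsystem, comma-joined, pretty-printed with jq.
gen_target_json() {
    local s config=()
    for s in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$s",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$s",
    "hostnqn": "nqn.2016-06.io.spdk:host$s",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}

# Used the same way as in the trace, via process substitution (shows up as /dev/fd/63):
#   bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_target_json {1..10})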
00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.634 { 00:20:42.634 "params": { 00:20:42.634 "name": "Nvme$subsystem", 00:20:42.634 "trtype": "$TEST_TRANSPORT", 00:20:42.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.634 "adrfam": "ipv4", 00:20:42.634 "trsvcid": "$NVMF_PORT", 00:20:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.634 "hdgst": ${hdgst:-false}, 00:20:42.634 "ddgst": ${ddgst:-false} 00:20:42.634 }, 00:20:42.634 "method": "bdev_nvme_attach_controller" 00:20:42.634 } 00:20:42.634 EOF 00:20:42.634 )") 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.634 { 00:20:42.634 "params": { 00:20:42.634 "name": "Nvme$subsystem", 00:20:42.634 "trtype": "$TEST_TRANSPORT", 00:20:42.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.634 "adrfam": "ipv4", 00:20:42.634 "trsvcid": "$NVMF_PORT", 00:20:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.634 "hdgst": ${hdgst:-false}, 00:20:42.634 "ddgst": ${ddgst:-false} 00:20:42.634 }, 00:20:42.634 "method": "bdev_nvme_attach_controller" 00:20:42.634 } 00:20:42.634 EOF 00:20:42.634 )") 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.634 { 00:20:42.634 "params": { 00:20:42.634 "name": "Nvme$subsystem", 00:20:42.634 "trtype": "$TEST_TRANSPORT", 00:20:42.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.634 "adrfam": "ipv4", 00:20:42.634 "trsvcid": "$NVMF_PORT", 00:20:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.634 "hdgst": ${hdgst:-false}, 00:20:42.634 "ddgst": ${ddgst:-false} 00:20:42.634 }, 00:20:42.634 "method": "bdev_nvme_attach_controller" 00:20:42.634 } 00:20:42.634 EOF 00:20:42.634 )") 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.634 { 00:20:42.634 "params": { 00:20:42.634 "name": "Nvme$subsystem", 00:20:42.634 
"trtype": "$TEST_TRANSPORT", 00:20:42.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.634 "adrfam": "ipv4", 00:20:42.634 "trsvcid": "$NVMF_PORT", 00:20:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.634 "hdgst": ${hdgst:-false}, 00:20:42.634 "ddgst": ${ddgst:-false} 00:20:42.634 }, 00:20:42.634 "method": "bdev_nvme_attach_controller" 00:20:42.634 } 00:20:42.634 EOF 00:20:42.634 )") 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.634 { 00:20:42.634 "params": { 00:20:42.634 "name": "Nvme$subsystem", 00:20:42.634 "trtype": "$TEST_TRANSPORT", 00:20:42.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.634 "adrfam": "ipv4", 00:20:42.634 "trsvcid": "$NVMF_PORT", 00:20:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.634 "hdgst": ${hdgst:-false}, 00:20:42.634 "ddgst": ${ddgst:-false} 00:20:42.634 }, 00:20:42.634 "method": "bdev_nvme_attach_controller" 00:20:42.634 } 00:20:42.634 EOF 00:20:42.634 )") 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.634 { 00:20:42.634 "params": { 00:20:42.634 "name": "Nvme$subsystem", 00:20:42.634 "trtype": "$TEST_TRANSPORT", 00:20:42.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.634 "adrfam": "ipv4", 00:20:42.634 "trsvcid": "$NVMF_PORT", 00:20:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.634 "hdgst": ${hdgst:-false}, 00:20:42.634 "ddgst": ${ddgst:-false} 00:20:42.634 }, 00:20:42.634 "method": "bdev_nvme_attach_controller" 00:20:42.634 } 00:20:42.634 EOF 00:20:42.634 )") 00:20:42.634 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.893 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.894 { 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme$subsystem", 00:20:42.894 "trtype": "$TEST_TRANSPORT", 00:20:42.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "$NVMF_PORT", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.894 "hdgst": ${hdgst:-false}, 00:20:42.894 "ddgst": ${ddgst:-false} 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 } 00:20:42.894 EOF 00:20:42.894 )") 00:20:42.894 [2024-12-09 15:13:44.432485] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:20:42.894 [2024-12-09 15:13:44.432533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.894 { 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme$subsystem", 00:20:42.894 "trtype": "$TEST_TRANSPORT", 00:20:42.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "$NVMF_PORT", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.894 "hdgst": ${hdgst:-false}, 00:20:42.894 "ddgst": ${ddgst:-false} 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 } 00:20:42.894 EOF 00:20:42.894 )") 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.894 { 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme$subsystem", 00:20:42.894 "trtype": "$TEST_TRANSPORT", 00:20:42.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "$NVMF_PORT", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.894 "hdgst": ${hdgst:-false}, 00:20:42.894 "ddgst": ${ddgst:-false} 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 } 00:20:42.894 EOF 00:20:42.894 )") 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.894 { 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme$subsystem", 00:20:42.894 "trtype": "$TEST_TRANSPORT", 00:20:42.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "$NVMF_PORT", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.894 "hdgst": ${hdgst:-false}, 00:20:42.894 "ddgst": ${ddgst:-false} 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 } 00:20:42.894 EOF 00:20:42.894 )") 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:42.894 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme1", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme2", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme3", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme4", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme5", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme6", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme7", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme8", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme9", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 },{ 00:20:42.894 "params": { 00:20:42.894 "name": "Nvme10", 00:20:42.894 "trtype": "tcp", 00:20:42.894 "traddr": "10.0.0.2", 00:20:42.894 "adrfam": "ipv4", 00:20:42.894 "trsvcid": "4420", 00:20:42.894 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:42.894 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:42.894 "hdgst": false, 00:20:42.894 "ddgst": false 00:20:42.894 }, 00:20:42.894 "method": "bdev_nvme_attach_controller" 00:20:42.894 }' 00:20:42.894 [2024-12-09 15:13:44.507290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.894 [2024-12-09 15:13:44.546981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1484833 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:44.797 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:45.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1484833 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1484570 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.734 { 00:20:45.734 "params": { 00:20:45.734 "name": "Nvme$subsystem", 00:20:45.734 "trtype": "$TEST_TRANSPORT", 00:20:45.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.734 "adrfam": "ipv4", 00:20:45.734 "trsvcid": "$NVMF_PORT", 00:20:45.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.734 "hdgst": ${hdgst:-false}, 00:20:45.734 "ddgst": ${ddgst:-false} 00:20:45.734 }, 00:20:45.734 "method": "bdev_nvme_attach_controller" 00:20:45.734 } 00:20:45.734 EOF 00:20:45.734 )") 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.734 { 00:20:45.734 "params": { 00:20:45.734 "name": "Nvme$subsystem", 00:20:45.734 "trtype": "$TEST_TRANSPORT", 00:20:45.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.734 "adrfam": "ipv4", 00:20:45.734 "trsvcid": "$NVMF_PORT", 00:20:45.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.734 "hdgst": ${hdgst:-false}, 00:20:45.734 "ddgst": ${ddgst:-false} 00:20:45.734 }, 00:20:45.734 "method": "bdev_nvme_attach_controller" 00:20:45.734 } 00:20:45.734 EOF 00:20:45.734 )") 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.734 { 00:20:45.734 "params": { 00:20:45.734 "name": "Nvme$subsystem", 00:20:45.734 "trtype": "$TEST_TRANSPORT", 00:20:45.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.734 "adrfam": "ipv4", 00:20:45.734 "trsvcid": "$NVMF_PORT", 00:20:45.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.734 "hdgst": ${hdgst:-false}, 00:20:45.734 "ddgst": ${ddgst:-false} 00:20:45.734 }, 00:20:45.734 "method": "bdev_nvme_attach_controller" 00:20:45.734 } 00:20:45.734 EOF 00:20:45.734 )") 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.734 { 00:20:45.734 "params": { 00:20:45.734 "name": "Nvme$subsystem", 00:20:45.734 "trtype": "$TEST_TRANSPORT", 00:20:45.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.734 "adrfam": "ipv4", 00:20:45.734 "trsvcid": "$NVMF_PORT", 00:20:45.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.734 "hdgst": ${hdgst:-false}, 00:20:45.734 "ddgst": ${ddgst:-false} 00:20:45.734 }, 00:20:45.734 "method": "bdev_nvme_attach_controller" 00:20:45.734 } 00:20:45.734 EOF 00:20:45.734 )") 00:20:45.734 15:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.734 { 00:20:45.734 "params": { 00:20:45.734 "name": "Nvme$subsystem", 00:20:45.734 "trtype": "$TEST_TRANSPORT", 00:20:45.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.734 "adrfam": "ipv4", 00:20:45.734 "trsvcid": "$NVMF_PORT", 00:20:45.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.734 "hdgst": ${hdgst:-false}, 00:20:45.734 "ddgst": ${ddgst:-false} 00:20:45.734 }, 00:20:45.734 "method": "bdev_nvme_attach_controller" 00:20:45.734 } 00:20:45.734 EOF 00:20:45.734 )") 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.734 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.734 { 00:20:45.734 "params": { 00:20:45.734 "name": "Nvme$subsystem", 00:20:45.734 "trtype": "$TEST_TRANSPORT", 00:20:45.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.734 "adrfam": "ipv4", 00:20:45.734 "trsvcid": "$NVMF_PORT", 00:20:45.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.734 "hdgst": ${hdgst:-false}, 00:20:45.734 "ddgst": ${ddgst:-false} 00:20:45.734 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 } 00:20:45.735 EOF 00:20:45.735 )") 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.735 { 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme$subsystem", 00:20:45.735 "trtype": "$TEST_TRANSPORT", 00:20:45.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "$NVMF_PORT", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.735 "hdgst": ${hdgst:-false}, 00:20:45.735 "ddgst": ${ddgst:-false} 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 } 00:20:45.735 EOF 00:20:45.735 )") 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.735 { 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme$subsystem", 00:20:45.735 "trtype": "$TEST_TRANSPORT", 00:20:45.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "$NVMF_PORT", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.735 "hdgst": 
${hdgst:-false}, 00:20:45.735 "ddgst": ${ddgst:-false} 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 } 00:20:45.735 EOF 00:20:45.735 )") 00:20:45.735 [2024-12-09 15:13:47.361938] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:45.735 [2024-12-09 15:13:47.361987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485317 ] 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.735 { 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme$subsystem", 00:20:45.735 "trtype": "$TEST_TRANSPORT", 00:20:45.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "$NVMF_PORT", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.735 "hdgst": ${hdgst:-false}, 00:20:45.735 "ddgst": ${ddgst:-false} 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 } 00:20:45.735 EOF 00:20:45.735 )") 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:45.735 { 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme$subsystem", 00:20:45.735 "trtype": "$TEST_TRANSPORT", 00:20:45.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "$NVMF_PORT", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.735 "hdgst": ${hdgst:-false}, 00:20:45.735 "ddgst": ${ddgst:-false} 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 } 00:20:45.735 EOF 00:20:45.735 )") 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
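As in the first attempt, the generated configuration is handed to the benchmark through a process substitution rather than a temporary file; the /dev/fd/62 and /dev/fd/63 numbers seen in the command lines are simply whichever descriptor bash assigned. Reconstructed from the shutdown.sh trace above, the invocation has roughly this shape (num_subsystems expands to 1..10 here; options match those visible in the log: queue depth 64, 64 KiB verify I/O, 1 second run):

# Sketch of the bdevperf launch pattern used by shutdown.sh in this test.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1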
00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:45.735 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme1", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme2", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme3", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme4", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme5", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme6", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme7", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.735 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.735 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.735 "hdgst": false, 00:20:45.735 "ddgst": false 00:20:45.735 }, 00:20:45.735 "method": "bdev_nvme_attach_controller" 00:20:45.735 },{ 00:20:45.735 "params": { 00:20:45.735 "name": "Nvme8", 00:20:45.735 "trtype": "tcp", 00:20:45.735 "traddr": "10.0.0.2", 00:20:45.735 "adrfam": "ipv4", 00:20:45.735 "trsvcid": "4420", 00:20:45.736 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.736 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:45.736 "hdgst": false, 00:20:45.736 "ddgst": false 00:20:45.736 }, 00:20:45.736 "method": "bdev_nvme_attach_controller" 00:20:45.736 },{ 00:20:45.736 "params": { 00:20:45.736 "name": "Nvme9", 00:20:45.736 "trtype": "tcp", 00:20:45.736 "traddr": "10.0.0.2", 00:20:45.736 "adrfam": "ipv4", 00:20:45.736 "trsvcid": "4420", 00:20:45.736 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.736 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:45.736 "hdgst": false, 00:20:45.736 "ddgst": false 00:20:45.736 }, 00:20:45.736 "method": "bdev_nvme_attach_controller" 00:20:45.736 },{ 00:20:45.736 "params": { 00:20:45.736 "name": "Nvme10", 00:20:45.736 "trtype": "tcp", 00:20:45.736 "traddr": "10.0.0.2", 00:20:45.736 "adrfam": "ipv4", 00:20:45.736 "trsvcid": "4420", 00:20:45.736 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.736 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.736 "hdgst": false, 00:20:45.736 "ddgst": false 00:20:45.736 }, 00:20:45.736 "method": "bdev_nvme_attach_controller" 00:20:45.736 }' 00:20:45.736 [2024-12-09 15:13:47.439588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.736 [2024-12-09 15:13:47.479753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.114 Running I/O for 1 seconds... 00:20:48.051 2249.00 IOPS, 140.56 MiB/s 00:20:48.051 Latency(us) 00:20:48.051 [2024-12-09T14:13:49.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.051 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme1n1 : 1.07 239.38 14.96 0.00 0.00 264943.18 17226.61 221698.93 00:20:48.051 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme2n1 : 1.02 255.39 15.96 0.00 0.00 243812.66 3932.16 221698.93 00:20:48.051 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme3n1 : 1.10 295.53 18.47 0.00 0.00 207532.98 5929.45 205720.62 00:20:48.051 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme4n1 : 1.10 290.08 18.13 0.00 0.00 209309.70 13169.62 213709.78 00:20:48.051 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme5n1 : 1.13 283.46 17.72 0.00 0.00 211229.55 16227.96 203723.34 00:20:48.051 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme6n1 : 1.14 281.32 17.58 0.00 0.00 209977.05 13232.03 213709.78 00:20:48.051 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme7n1 : 1.12 285.02 17.81 0.00 0.00 203990.75 18225.25 213709.78 00:20:48.051 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme8n1 : 1.15 334.69 20.92 0.00 0.00 171349.66 7115.34 209715.20 00:20:48.051 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme9n1 : 1.14 279.65 17.48 0.00 0.00 202168.81 16103.13 222697.57 00:20:48.051 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:48.051 Verification LBA range: start 0x0 length 0x400 00:20:48.051 Nvme10n1 : 1.14 280.29 17.52 0.00 0.00 198530.19 18100.42 238675.87 00:20:48.051 [2024-12-09T14:13:49.846Z] =================================================================================================================== 00:20:48.051 [2024-12-09T14:13:49.846Z] Total : 2824.81 176.55 0.00 0.00 209782.34 3932.16 238675.87 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.311 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.311 rmmod nvme_tcp 00:20:48.311 rmmod nvme_fabrics 00:20:48.311 rmmod nvme_keyring 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1484570 ']' 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1484570 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1484570 ']' 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1484570 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.311 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1484570 00:20:48.571 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.571 15:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.571 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1484570' 00:20:48.571 killing process with pid 1484570 00:20:48.571 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1484570 00:20:48.571 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1484570 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.832 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.370 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.371 00:20:51.371 real 0m15.057s 00:20:51.371 user 0m32.680s 00:20:51.371 sys 0m5.847s 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 ************************************ 00:20:51.371 END TEST nvmf_shutdown_tc1 00:20:51.371 ************************************ 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 ************************************ 00:20:51.371 START TEST nvmf_shutdown_tc2 00:20:51.371 ************************************ 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.371 15:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:51.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.371 15:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:51.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:51.371 Found net devices under 0000:af:00.0: cvl_0_0 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.371 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.372 15:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:51.372 Found net devices under 0000:af:00.1: cvl_0_1 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:20:51.372 00:20:51.372 --- 10.0.0.2 ping statistics --- 00:20:51.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.372 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:20:51.372 00:20:51.372 --- 10.0.0.1 ping statistics --- 00:20:51.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.372 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1486336 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1486336 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1486336 ']' 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.372 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.372 [2024-12-09 15:13:52.990625] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:51.372 [2024-12-09 15:13:52.990675] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.372 [2024-12-09 15:13:53.070511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.372 [2024-12-09 15:13:53.110357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.372 [2024-12-09 15:13:53.110396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.372 [2024-12-09 15:13:53.110404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.372 [2024-12-09 15:13:53.110410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.372 [2024-12-09 15:13:53.110415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
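Condensed, the nvmftestinit/nvmfappstart sequence traced above moves one of the two cvl_0_* ports discovered earlier into a private network namespace and starts the NVMe-oF target inside it, so target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over a real link on a single host. A shortened recap of the commands from the trace (the nvmf_tgt path is abbreviated; in the log the netns exec prefix appears twice because NVMF_APP already carries it):

ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-facing port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                    # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> host sanity check
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E  # target app on cores 1-4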
00:20:51.372 [2024-12-09 15:13:53.111821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.372 [2024-12-09 15:13:53.111930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.372 [2024-12-09 15:13:53.112013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.372 [2024-12-09 15:13:53.112014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.632 [2024-12-09 15:13:53.256402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:51.632 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.633 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.633 Malloc1 00:20:51.633 [2024-12-09 15:13:53.366243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.633 Malloc2 00:20:51.893 Malloc3 00:20:51.893 Malloc4 00:20:51.893 Malloc5 00:20:51.893 Malloc6 00:20:51.893 Malloc7 00:20:51.893 Malloc8 00:20:52.153 Malloc9 00:20:52.153 Malloc10 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1486401 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1486401 /var/tmp/bdevperf.sock 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1486401 ']' 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.153 15:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.153 { 00:20:52.153 "params": { 00:20:52.153 "name": "Nvme$subsystem", 00:20:52.153 "trtype": "$TEST_TRANSPORT", 00:20:52.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.153 "adrfam": "ipv4", 00:20:52.153 "trsvcid": "$NVMF_PORT", 00:20:52.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.153 "hdgst": ${hdgst:-false}, 00:20:52.153 "ddgst": ${ddgst:-false} 00:20:52.153 }, 00:20:52.153 "method": "bdev_nvme_attach_controller" 00:20:52.153 } 00:20:52.153 EOF 00:20:52.153 )") 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.153 { 00:20:52.153 "params": { 00:20:52.153 "name": "Nvme$subsystem", 00:20:52.153 "trtype": "$TEST_TRANSPORT", 00:20:52.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.153 "adrfam": "ipv4", 00:20:52.153 "trsvcid": "$NVMF_PORT", 00:20:52.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.153 "hdgst": ${hdgst:-false}, 00:20:52.153 "ddgst": ${ddgst:-false} 00:20:52.153 }, 00:20:52.153 "method": "bdev_nvme_attach_controller" 00:20:52.153 } 00:20:52.153 EOF 00:20:52.153 )") 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.153 { 00:20:52.153 "params": { 00:20:52.153 
"name": "Nvme$subsystem", 00:20:52.153 "trtype": "$TEST_TRANSPORT", 00:20:52.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.153 "adrfam": "ipv4", 00:20:52.153 "trsvcid": "$NVMF_PORT", 00:20:52.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.153 "hdgst": ${hdgst:-false}, 00:20:52.153 "ddgst": ${ddgst:-false} 00:20:52.153 }, 00:20:52.153 "method": "bdev_nvme_attach_controller" 00:20:52.153 } 00:20:52.153 EOF 00:20:52.153 )") 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.153 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.153 { 00:20:52.153 "params": { 00:20:52.153 "name": "Nvme$subsystem", 00:20:52.153 "trtype": "$TEST_TRANSPORT", 00:20:52.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.153 "adrfam": "ipv4", 00:20:52.153 "trsvcid": "$NVMF_PORT", 00:20:52.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.153 "hdgst": ${hdgst:-false}, 00:20:52.153 "ddgst": ${ddgst:-false} 00:20:52.153 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.154 { 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme$subsystem", 00:20:52.154 "trtype": "$TEST_TRANSPORT", 00:20:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "$NVMF_PORT", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.154 "hdgst": ${hdgst:-false}, 00:20:52.154 "ddgst": ${ddgst:-false} 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.154 { 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme$subsystem", 00:20:52.154 "trtype": "$TEST_TRANSPORT", 00:20:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "$NVMF_PORT", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.154 "hdgst": ${hdgst:-false}, 00:20:52.154 "ddgst": ${ddgst:-false} 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.154 { 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme$subsystem", 00:20:52.154 "trtype": "$TEST_TRANSPORT", 00:20:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "$NVMF_PORT", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.154 "hdgst": ${hdgst:-false}, 00:20:52.154 "ddgst": ${ddgst:-false} 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 [2024-12-09 15:13:53.848755] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:52.154 [2024-12-09 15:13:53.848804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486401 ] 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.154 { 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme$subsystem", 00:20:52.154 "trtype": "$TEST_TRANSPORT", 00:20:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "$NVMF_PORT", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.154 "hdgst": ${hdgst:-false}, 00:20:52.154 "ddgst": ${ddgst:-false} 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.154 { 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme$subsystem", 00:20:52.154 "trtype": "$TEST_TRANSPORT", 00:20:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "$NVMF_PORT", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.154 "hdgst": ${hdgst:-false}, 00:20:52.154 "ddgst": ${ddgst:-false} 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.154 { 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme$subsystem", 00:20:52.154 "trtype": "$TEST_TRANSPORT", 00:20:52.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.154 
"adrfam": "ipv4", 00:20:52.154 "trsvcid": "$NVMF_PORT", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.154 "hdgst": ${hdgst:-false}, 00:20:52.154 "ddgst": ${ddgst:-false} 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 } 00:20:52.154 EOF 00:20:52.154 )") 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:52.154 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme1", 00:20:52.154 "trtype": "tcp", 00:20:52.154 "traddr": "10.0.0.2", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "4420", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.154 "hdgst": false, 00:20:52.154 "ddgst": false 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 },{ 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme2", 00:20:52.154 "trtype": "tcp", 00:20:52.154 "traddr": "10.0.0.2", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "4420", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:52.154 "hdgst": false, 00:20:52.154 "ddgst": false 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 },{ 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme3", 00:20:52.154 "trtype": "tcp", 00:20:52.154 "traddr": "10.0.0.2", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "4420", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:52.154 "hdgst": false, 00:20:52.154 "ddgst": false 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 },{ 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme4", 00:20:52.154 "trtype": "tcp", 00:20:52.154 "traddr": "10.0.0.2", 00:20:52.154 "adrfam": "ipv4", 00:20:52.154 "trsvcid": "4420", 00:20:52.154 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:52.154 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:52.154 "hdgst": false, 00:20:52.154 "ddgst": false 00:20:52.154 }, 00:20:52.154 "method": "bdev_nvme_attach_controller" 00:20:52.154 },{ 00:20:52.154 "params": { 00:20:52.154 "name": "Nvme5", 00:20:52.154 "trtype": "tcp", 00:20:52.154 "traddr": "10.0.0.2", 00:20:52.155 "adrfam": "ipv4", 00:20:52.155 "trsvcid": "4420", 00:20:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:52.155 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:52.155 "hdgst": false, 00:20:52.155 "ddgst": false 00:20:52.155 }, 00:20:52.155 "method": "bdev_nvme_attach_controller" 00:20:52.155 },{ 00:20:52.155 "params": { 00:20:52.155 "name": "Nvme6", 00:20:52.155 "trtype": "tcp", 00:20:52.155 "traddr": "10.0.0.2", 00:20:52.155 "adrfam": "ipv4", 00:20:52.155 "trsvcid": "4420", 00:20:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:52.155 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:52.155 "hdgst": false, 00:20:52.155 "ddgst": false 00:20:52.155 }, 00:20:52.155 "method": "bdev_nvme_attach_controller" 00:20:52.155 },{ 00:20:52.155 "params": { 00:20:52.155 "name": "Nvme7", 00:20:52.155 "trtype": "tcp", 00:20:52.155 "traddr": "10.0.0.2", 
00:20:52.155 "adrfam": "ipv4", 00:20:52.155 "trsvcid": "4420", 00:20:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:52.155 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:52.155 "hdgst": false, 00:20:52.155 "ddgst": false 00:20:52.155 }, 00:20:52.155 "method": "bdev_nvme_attach_controller" 00:20:52.155 },{ 00:20:52.155 "params": { 00:20:52.155 "name": "Nvme8", 00:20:52.155 "trtype": "tcp", 00:20:52.155 "traddr": "10.0.0.2", 00:20:52.155 "adrfam": "ipv4", 00:20:52.155 "trsvcid": "4420", 00:20:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:52.155 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:52.155 "hdgst": false, 00:20:52.155 "ddgst": false 00:20:52.155 }, 00:20:52.155 "method": "bdev_nvme_attach_controller" 00:20:52.155 },{ 00:20:52.155 "params": { 00:20:52.155 "name": "Nvme9", 00:20:52.155 "trtype": "tcp", 00:20:52.155 "traddr": "10.0.0.2", 00:20:52.155 "adrfam": "ipv4", 00:20:52.155 "trsvcid": "4420", 00:20:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:52.155 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:52.155 "hdgst": false, 00:20:52.155 "ddgst": false 00:20:52.155 }, 00:20:52.155 "method": "bdev_nvme_attach_controller" 00:20:52.155 },{ 00:20:52.155 "params": { 00:20:52.155 "name": "Nvme10", 00:20:52.155 "trtype": "tcp", 00:20:52.155 "traddr": "10.0.0.2", 00:20:52.155 "adrfam": "ipv4", 00:20:52.155 "trsvcid": "4420", 00:20:52.155 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:52.155 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:52.155 "hdgst": false, 00:20:52.155 "ddgst": false 00:20:52.155 }, 00:20:52.155 "method": "bdev_nvme_attach_controller" 00:20:52.155 }' 00:20:52.155 [2024-12-09 15:13:53.925802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.414 [2024-12-09 15:13:53.966550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.794 Running I/O for 10 seconds... 
00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:54.053 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:54.312 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:54.312 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:54.312 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:54.312 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:54.312 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.312 15:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.312 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=149 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 149 -ge 100 ']' 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1486401 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1486401 ']' 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1486401 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486401 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486401' 00:20:54.571 killing process with pid 1486401 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1486401 00:20:54.571 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1486401 00:20:54.571 Received shutdown signal, test time was about 0.843800 seconds 00:20:54.571 00:20:54.571 Latency(us) 00:20:54.571 [2024-12-09T14:13:56.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.571 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.571 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme1n1 : 0.83 307.13 19.20 0.00 0.00 205610.79 15541.39 209715.20 00:20:54.572 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme2n1 : 0.83 308.39 19.27 0.00 0.00 200899.78 16477.62 211712.49 00:20:54.572 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme3n1 : 0.83 309.60 19.35 0.00 0.00 196276.91 23343.30 201726.05 00:20:54.572 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme4n1 : 0.84 306.44 19.15 0.00 0.00 194529.65 15291.73 
213709.78 00:20:54.572 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme5n1 : 0.84 304.08 19.00 0.00 0.00 192273.31 17101.78 214708.42 00:20:54.572 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme6n1 : 0.81 235.98 14.75 0.00 0.00 241834.18 15728.64 217704.35 00:20:54.572 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme7n1 : 0.84 303.62 18.98 0.00 0.00 184055.95 15728.64 198730.12 00:20:54.572 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme8n1 : 0.81 242.97 15.19 0.00 0.00 223245.17 3214.38 214708.42 00:20:54.572 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme9n1 : 0.82 233.36 14.58 0.00 0.00 228237.57 18350.08 219701.64 00:20:54.572 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:54.572 Verification LBA range: start 0x0 length 0x400 00:20:54.572 Nvme10n1 : 0.82 233.71 14.61 0.00 0.00 224022.92 16103.13 234681.30 00:20:54.572 [2024-12-09T14:13:56.367Z] =================================================================================================================== 00:20:54.572 [2024-12-09T14:13:56.367Z] Total : 2785.29 174.08 0.00 0.00 206885.64 3214.38 234681.30 00:20:54.831 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1486336 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.767 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.767 rmmod nvme_tcp 00:20:55.767 rmmod nvme_fabrics 00:20:55.767 rmmod nvme_keyring 00:20:55.768 15:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1486336 ']' 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1486336 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1486336 ']' 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1486336 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486336 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486336' 00:20:55.768 killing process with pid 1486336 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1486336 00:20:55.768 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1486336 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.336 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.336 15:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.241 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.241 00:20:58.241 real 0m7.375s 00:20:58.241 user 0m21.653s 00:20:58.241 sys 0m1.332s 00:20:58.241 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.241 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:58.241 ************************************ 00:20:58.241 END TEST nvmf_shutdown_tc2 00:20:58.241 ************************************ 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:58.501 ************************************ 00:20:58.501 START TEST nvmf_shutdown_tc3 00:20:58.501 ************************************ 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.501 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.501 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.502 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:58.502 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:58.502 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.502 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:58.502 Found net devices under 0000:af:00.0: cvl_0_0 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:58.502 Found net devices under 0000:af:00.1: cvl_0_1 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:58.502 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:58.502 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:58.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:20:58.762 00:20:58.762 --- 10.0.0.2 ping statistics --- 00:20:58.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.762 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:20:58.762 00:20:58.762 --- 10.0.0.1 ping statistics --- 00:20:58.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.762 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1487644 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1487644 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1487644 ']' 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.762 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.762 [2024-12-09 15:14:00.454115] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:20:58.762 [2024-12-09 15:14:00.454167] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.762 [2024-12-09 15:14:00.531862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.021 [2024-12-09 15:14:00.574117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.021 [2024-12-09 15:14:00.574152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.021 [2024-12-09 15:14:00.574160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.021 [2024-12-09 15:14:00.574166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.021 [2024-12-09 15:14:00.574171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.021 [2024-12-09 15:14:00.575592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.021 [2024-12-09 15:14:00.575701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.021 [2024-12-09 15:14:00.575806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.021 [2024-12-09 15:14:00.575807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.021 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.021 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:59.021 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.021 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.022 [2024-12-09 15:14:00.713404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:59.022 15:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.022 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.022 Malloc1 
00:20:59.281 [2024-12-09 15:14:00.817819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.281 Malloc2 00:20:59.281 Malloc3 00:20:59.281 Malloc4 00:20:59.281 Malloc5 00:20:59.281 Malloc6 00:20:59.281 Malloc7 00:20:59.540 Malloc8 00:20:59.540 Malloc9 00:20:59.540 Malloc10 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1487817 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1487817 /var/tmp/bdevperf.sock 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1487817 ']' 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.540 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
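The trace that follows is gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem (1 through 10) and handing the result to bdevperf through process substitution, which is why the bdevperf command line above shows --json /dev/fd/63. Below is a minimal sketch of that pattern, using the 10.0.0.2:4420 target address seen in this run; gen_json is an illustrative stand-in, not the actual helper from test/nvmf/common.sh (which assembles the same per-subsystem JSON with a heredoc and jq).

# Build a bdev subsystem config with one NVMe-oF TCP controller per argument.
gen_json() {
  local n entries=()
  for n in "$@"; do
    entries+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$n" "$n" "$n")")
  done
  local IFS=,   # join the per-subsystem entries with commas
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}"
}
# Feed the generated config to bdevperf the same way the test does: the process
# substitution is exposed to bdevperf as /dev/fd/63 (run from the spdk tree).
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json {1..10}) \
  -q 64 -o 65536 -w verify -t 10 &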
00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 
"trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 [2024-12-09 15:14:01.285289] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:20:59.541 [2024-12-09 15:14:01.285340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487817 ] 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.541 "adrfam": "ipv4", 00:20:59.541 "trsvcid": "$NVMF_PORT", 00:20:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.541 "hdgst": ${hdgst:-false}, 00:20:59.541 "ddgst": ${ddgst:-false} 00:20:59.541 }, 00:20:59.541 "method": "bdev_nvme_attach_controller" 00:20:59.541 } 00:20:59.541 EOF 00:20:59.541 )") 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.541 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.541 { 00:20:59.541 "params": { 00:20:59.541 "name": "Nvme$subsystem", 00:20:59.541 "trtype": "$TEST_TRANSPORT", 00:20:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "$NVMF_PORT", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.542 "hdgst": ${hdgst:-false}, 00:20:59.542 "ddgst": ${ddgst:-false} 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 } 00:20:59.542 EOF 00:20:59.542 )") 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.542 { 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme$subsystem", 00:20:59.542 "trtype": "$TEST_TRANSPORT", 00:20:59.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.542 
"adrfam": "ipv4", 00:20:59.542 "trsvcid": "$NVMF_PORT", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.542 "hdgst": ${hdgst:-false}, 00:20:59.542 "ddgst": ${ddgst:-false} 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 } 00:20:59.542 EOF 00:20:59.542 )") 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:59.542 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme1", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme2", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme3", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme4", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme5", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme6", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme7", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 
00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme8", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme9", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 },{ 00:20:59.542 "params": { 00:20:59.542 "name": "Nvme10", 00:20:59.542 "trtype": "tcp", 00:20:59.542 "traddr": "10.0.0.2", 00:20:59.542 "adrfam": "ipv4", 00:20:59.542 "trsvcid": "4420", 00:20:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:59.542 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:59.542 "hdgst": false, 00:20:59.542 "ddgst": false 00:20:59.542 }, 00:20:59.542 "method": "bdev_nvme_attach_controller" 00:20:59.542 }' 00:20:59.801 [2024-12-09 15:14:01.360059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.801 [2024-12-09 15:14:01.399811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.176 Running I/O for 10 seconds... 
00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=18 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 18 -ge 100 ']' 00:21:01.434 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:01.693 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:01.693 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:01.693 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:01.693 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:01.693 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.693 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1487644 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1487644 ']' 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1487644 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487644 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487644' 00:21:01.967 killing process with pid 1487644 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1487644 00:21:01.967 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1487644 00:21:01.967 [2024-12-09 15:14:03.580442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89cf50 is same with the state(6) to be set 00:21:01.967 [2024-12-09 15:14:03.580495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89cf50 is same with the state(6) to be set 00:21:01.967 [2024-12-09 15:14:03.580503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89cf50 is same with the state(6) to be set 00:21:01.967 [2024-12-09 15:14:03.580510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89cf50 is same with the state(6) to be set 00:21:01.967 [2024-12-09 15:14:03.580516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89cf50 is same with the state(6) to be set 00:21:01.967 [2024-12-09 15:14:03.580523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x89cf50 is same with the state(6) to be set
[2024-12-09 15:14:03.582101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89fb40 is same with the state(6) to be set
[2024-12-09 15:14:03.583872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89d440 is same with the state(6) to be set
[2024-12-09 15:14:03.584296] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-12-09 15:14:03.585884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89d910 is same with the state(6) to be set
[2024-12-09 15:14:03.586128] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-12-09 15:14:03.587347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set
[identical nvmf_tcp_qpair_set_recv_state messages for tqpair 0x89cf50, 0x89fb40, 0x89d440, 0x89d910 and 0x89de00 repeated with only the timestamp changing; duplicate lines omitted]
00:21:01.971 [2024-12-09 15:14:03.587452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587526] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:01.971 [2024-12-09 15:14:03.587540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 
[2024-12-09 15:14:03.587594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the state(6) to be set 00:21:01.971 [2024-12-09 15:14:03.587725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89de00 is same with the 
00:21:01.971 [2024-12-09 15:14:03.588149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:01.971 [2024-12-09 15:14:03.588171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1, cid:2 and cid:3 ...]
00:21:01.971 [2024-12-09 15:14:03.588228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbca4c0 is same with the state(6) to be set
[... same block of four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeated for tqpair=0x76e1b0 (15:14:03.588318), tqpair=0x76f870 (15:14:03.588405), tqpair=0x764790 (15:14:03.588514) and tqpair=0x770750 (15:14:03.588594) ...]
00:21:01.972 [2024-12-09 15:14:03.589316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.972 [2024-12-09 15:14:03.589339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE / ABORTED - SQ DELETION pair repeated for cid:39 (lba:21376) through cid:63 (lba:24448) ...]
00:21:01.973 [2024-12-09 15:14:03.589745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.973 [2024-12-09 15:14:03.589752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ / ABORTED - SQ DELETION pair repeated for cid:1 (lba:16512) through cid:37 (lba:21120) ...]
00:21:01.974 [2024-12-09 15:14:03.590589] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:01.974 [2024-12-09 15:14:03.591753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
[2024-12-09 15:14:03.591806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7702c0 (9): Bad file descriptor
00:21:01.974 [2024-12-09 15:14:03.593310] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:01.974 [2024-12-09 15:14:03.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-09 15:14:03.594001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7702c0 with addr=10.0.0.2, port=4420
[2024-12-09 15:14:03.594010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7702c0 is same with the state(6) to be set
00:21:01.974 [2024-12-09 15:14:03.594180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89e7c0 is same with the state(6) to be set
[... same recv-state error for tqpair=0x89e7c0 repeated through 15:14:03.594560 ...]
00:21:01.974 [2024-12-09 15:14:03.594406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7702c0 (9): Bad file descriptor
00:21:01.975 [2024-12-09 15:14:03.594652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:01.975 [2024-12-09 15:14:03.594666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:01.975 [2024-12-09 15:14:03.594675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:01.975 [2024-12-09 15:14:03.594683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:01.975 [2024-12-09 15:14:03.595210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89ec90 is same with the state(6) to be set
[... same recv-state error for tqpair=0x89ec90 repeated through 15:14:03.595642 ...]
00:21:01.975 [2024-12-09 15:14:03.595611] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:01.975 [2024-12-09 15:14:03.596495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.975 [2024-12-09 15:14:03.596514] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.975 [2024-12-09 15:14:03.596521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.975 [2024-12-09 15:14:03.596531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.975 [2024-12-09 15:14:03.596539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.975 [2024-12-09 15:14:03.596540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.975 [2024-12-09 15:14:03.596552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.975 [2024-12-09 15:14:03.596555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596713] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596807] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.976 [2024-12-09 15:14:03.596872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.976 [2024-12-09 15:14:03.596878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.976 [2024-12-09 15:14:03.596880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.977 [2024-12-09 15:14:03.596888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89f180 is same with the state(6) to be set 00:21:01.977 [2024-12-09 
15:14:03.596899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.596991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.596999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.597338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.597346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.607003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.607018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.977 [2024-12-09 15:14:03.607026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.977 [2024-12-09 15:14:03.607036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.978 [2024-12-09 15:14:03.607242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb780f0 is same with the state(6) to be set 00:21:01.978 [2024-12-09 15:14:03.607398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbddf70 is same with the state(6) to be set 00:21:01.978 [2024-12-09 15:14:03.607485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbca4c0 (9): Bad file descriptor 00:21:01.978 [2024-12-09 15:14:03.607504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76e1b0 (9): Bad file descriptor 00:21:01.978 [2024-12-09 15:14:03.607519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f870 (9): Bad file descriptor 00:21:01.978 [2024-12-09 15:14:03.607548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:01.978 [2024-12-09 15:14:03.607557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x685610 is same with the state(6) to be set 00:21:01.978 [2024-12-09 15:14:03.607628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4450 is same with the state(6) to be set 00:21:01.978 [2024-12-09 15:14:03.607713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.978 [2024-12-09 15:14:03.607768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.978 [2024-12-09 15:14:03.607775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94540 is same with the state(6) to be set 00:21:01.978 [2024-12-09 15:14:03.607789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x764790 (9): Bad file descriptor 00:21:01.978 [2024-12-09 15:14:03.607804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x770750 (9): Bad file descriptor 00:21:01.978 [2024-12-09 15:14:03.609066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:01.978 [2024-12-09 15:14:03.609094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x685610 (9): Bad file descriptor 00:21:01.978 [2024-12-09 15:14:03.609204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:01.978 [2024-12-09 15:14:03.609587] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:01.978 [2024-12-09 15:14:03.609857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.978 [2024-12-09 15:14:03.609877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x685610 with addr=10.0.0.2, port=4420 00:21:01.978 [2024-12-09 15:14:03.609888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x685610 is same with the state(6) to be set 00:21:01.978 [2024-12-09 15:14:03.610057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.978 [2024-12-09 15:14:03.610072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7702c0 with addr=10.0.0.2, port=4420 00:21:01.978 [2024-12-09 15:14:03.610082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7702c0 is same with the state(6) to be set 00:21:01.979 [2024-12-09 15:14:03.610147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x685610 (9): Bad file descriptor 00:21:01.979 [2024-12-09 15:14:03.610162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7702c0 (9): Bad file descriptor 00:21:01.979 [2024-12-09 15:14:03.610225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:01.979 [2024-12-09 15:14:03.610238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:01.979 [2024-12-09 15:14:03.610248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:21:01.979 [2024-12-09 15:14:03.610258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:01.979 [2024-12-09 15:14:03.610268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:01.979 [2024-12-09 15:14:03.610277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:01.979 [2024-12-09 15:14:03.610286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:01.979 [2024-12-09 15:14:03.610294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:01.979 [2024-12-09 15:14:03.617427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbddf70 (9): Bad file descriptor 00:21:01.979 [2024-12-09 15:14:03.617491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4450 (9): Bad file descriptor 00:21:01.979 [2024-12-09 15:14:03.617514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94540 (9): Bad file descriptor 00:21:01.979 [2024-12-09 15:14:03.617640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:01.979 [2024-12-09 15:14:03.617791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 [2024-12-09 15:14:03.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.979 [2024-12-09 15:14:03.617992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.979 
[2024-12-09 15:14:03.618002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats for every remaining command on sqid:1 (READ cid:22-63, lba:19200-24448; WRITE cid:0-4, lba:24576-25088; len:128 each), all completed as ABORTED - SQ DELETION (00/08), timestamps 15:14:03.618013-15:14:03.618838 ...]
00:21:01.980 [2024-12-09 15:14:03.618845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9746c0 is same with the state(6) to be set
[... identical READ *NOTICE* / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0-63, lba:16384-24448, len:128, timestamps 15:14:03.619833-15:14:03.620827 ...]
00:21:01.982 [2024-12-09 15:14:03.620834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975790 is same with the state(6) to be set
[... identical READ *NOTICE* / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0-63, lba:16384-24448, len:128, timestamps 15:14:03.621814-15:14:03.622802 ...]
00:21:01.984 [2024-12-09 15:14:03.622809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb47a20 is same with the state(6) to be set
[... the READ *NOTICE* / ABORTED - SQ DELETION (00/08) pairs start again at cid:0, lba:16384 (15:14:03.623807) and run through cid:32, lba:20480, with a pause between 15:14:03.624160 and 15:14:03.633728 ...]
00:21:01.985 [2024-12-09 15:14:03.633917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.985 [2024-12-09 15:14:03.633924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:01.985 [2024-12-09 15:14:03.633933] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.633940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.633948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.633955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.633965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.633972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.633980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.633986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.633994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.985 [2024-12-09 15:14:03.634215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.985 [2024-12-09 15:14:03.634227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.634594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.634601] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb74a00 is same with the state(6) to be set 00:21:01.986 [2024-12-09 15:14:03.635635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.986 [2024-12-09 15:14:03.635984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.986 [2024-12-09 15:14:03.635992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.635999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:01.987 [2024-12-09 15:14:03.636292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 
15:14:03.636449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.987 [2024-12-09 15:14:03.636559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.987 [2024-12-09 15:14:03.636565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.636580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.636595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636603] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.636611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.636626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.636641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.636657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.636665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b5370 is same with the state(6) to be set 00:21:01.988 [2024-12-09 15:14:03.637635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:01.988 [2024-12-09 15:14:03.637653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:01.988 [2024-12-09 15:14:03.637664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:01.988 [2024-12-09 15:14:03.637676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:01.988 [2024-12-09 15:14:03.637752] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:21:01.988 [2024-12-09 15:14:03.637838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:01.988 [2024-12-09 15:14:03.638145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.988 [2024-12-09 15:14:03.638161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x770750 with addr=10.0.0.2, port=4420 00:21:01.988 [2024-12-09 15:14:03.638169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770750 is same with the state(6) to be set 00:21:01.988 [2024-12-09 15:14:03.638391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.988 [2024-12-09 15:14:03.638403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x764790 with addr=10.0.0.2, port=4420 00:21:01.988 [2024-12-09 15:14:03.638414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x764790 is same with the state(6) to be set 00:21:01.988 [2024-12-09 15:14:03.638655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.988 [2024-12-09 15:14:03.638667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76e1b0 with addr=10.0.0.2, port=4420 00:21:01.988 [2024-12-09 15:14:03.638674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76e1b0 is same with the state(6) to be set 00:21:01.988 [2024-12-09 15:14:03.638846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.988 [2024-12-09 15:14:03.638860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76f870 with addr=10.0.0.2, port=4420 00:21:01.988 [2024-12-09 15:14:03.638869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76f870 is same with the state(6) to be set 00:21:01.988 [2024-12-09 15:14:03.640050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 
15:14:03.640156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640375] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-12-09 15:14:03.640521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.988 [2024-12-09 15:14:03.640535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.640987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.640996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-12-09 15:14:03.641288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.989 [2024-12-09 15:14:03.641299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.641314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.641326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.641335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.641346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.641355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.641366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.641375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.641387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.641396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.641406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76ea0 is same with the state(6) to be set 00:21:01.990 [2024-12-09 15:14:03.642710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.642985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.642995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-12-09 15:14:03.643365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.990 [2024-12-09 15:14:03.643376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.991 [2024-12-09 15:14:03.643551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.991 [2024-12-09 15:14:03.643563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.991 [2024-12-09 15:14:03.643572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:01.991 [2024-12-09 15:14:03.643584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.991 [2024-12-09 15:14:03.643593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:01.991 [2024-12-09 15:14:03.643605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.991 [2024-12-09 15:14:03.643614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:01.991 [2024-12-09 15:14:03.643626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.991 [2024-12-09 15:14:03.643635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:01.991 [2024-12-09 15:14:03.643647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:01.991 [2024-12-09 15:14:03.643656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:01.991 [2024-12-09 15:14:03.643670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf180 is same with the state(6) to be set
00:21:01.991 [2024-12-09 15:14:03.645147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:01.991 [2024-12-09 15:14:03.645170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:01.991 [2024-12-09 15:14:03.645185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:01.991 [2024-12-09 15:14:03.645202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:01.991 task offset: 21248 on job bdev=Nvme5n1 fails
00:21:01.991
00:21:01.991 Latency(us)
00:21:01.991 [2024-12-09T14:14:03.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.991 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme1n1 ended in about 0.65 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme1n1 : 0.65 204.08 12.75 98.20 0.00 208918.38 23218.47 197731.47
00:21:01.991 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme2n1 ended in about 0.65 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme2n1 : 0.65 195.81 12.24 97.91 0.00 209834.99 15291.73 197731.47
00:21:01.991 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme3n1 ended in about 0.66 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme3n1 : 0.66 195.22 12.20 97.61 0.00 205345.16 19099.06 209715.20
00:21:01.991 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme4n1 ended in about 0.67 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme4n1 : 0.67 191.78 11.99 95.89 0.00 204152.77 15291.73 189742.32
00:21:01.991 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme5n1 ended in about 0.62 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme5n1 : 0.62 205.27 12.83 102.64 0.00 184373.33 1973.88 216705.71
00:21:01.991 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme6n1 ended in about 0.67 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme6n1 : 0.67 189.83 11.86 94.91 0.00 196119.08 27088.21 211712.49
00:21:01.991 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme7n1 ended in about 0.64 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme7n1 : 0.64 199.77 12.49 99.88 0.00 179811.96 15853.47 195734.19
00:21:01.991 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme8n1 ended in about 0.68 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme8n1 : 0.68 217.31 13.58 66.52 0.00 181852.00 24966.10 198730.12
00:21:01.991 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme9n1 : 0.63 204.72 12.79 0.00 0.00 246543.12 18225.25 225693.50
00:21:01.991 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.991 Job: Nvme10n1 ended in about 0.67 seconds with error
00:21:01.991 Verification LBA range: start 0x0 length 0x400
00:21:01.991 Nvme10n1 : 0.67 95.60 5.97 95.60 0.00 260975.18 18974.23 245666.38
00:21:01.991 [2024-12-09T14:14:03.786Z] ===================================================================================================================
00:21:01.991 [2024-12-09T14:14:03.786Z] Total : 1899.38 118.71 849.16 0.00 204521.54 1973.88 245666.38
00:21:01.991 [2024-12-09 15:14:03.675130] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:01.991 [2024-12-09 15:14:03.675187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:01.991 [2024-12-09 15:14:03.675542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:01.991 [2024-12-09 15:14:03.675562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbca4c0 with addr=10.0.0.2, port=4420
00:21:01.991 [2024-12-09 15:14:03.675572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbca4c0 is same with the state(6) to be set
00:21:01.991 [2024-12-09 15:14:03.675588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x770750 (9): Bad file descriptor
00:21:01.991 [2024-12-09 15:14:03.675600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x764790 (9): Bad file descriptor
00:21:01.992 [2024-12-09 15:14:03.675609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76e1b0 (9): Bad file descriptor
00:21:01.992 [2024-12-09 15:14:03.675618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f870 (9): Bad file descriptor
00:21:01.992 [2024-12-09 15:14:03.676016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed,
errno = 111 00:21:01.992 [2024-12-09 15:14:03.676034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7702c0 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.676043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7702c0 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.676243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.676256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x685610 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.676263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x685610 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.676343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.676354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc4450 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.676361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc4450 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.676488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.676499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94540 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.676506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94540 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.676719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.676730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbddf70 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.676737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbddf70 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.676747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbca4c0 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.676757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.676764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.676772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.676781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.676790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.676798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.676808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.676814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:01.992 [2024-12-09 15:14:03.676821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.676828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.676834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.676840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.676847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.676854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.676861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.676866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.676902] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:01.992 [2024-12-09 15:14:03.677441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7702c0 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.677457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x685610 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.677467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4450 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.677477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94540 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.677486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbddf70 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.677493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.677500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.677507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.677513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:01.992 [2024-12-09 15:14:03.677553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:01.992 [2024-12-09 15:14:03.677564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:01.992 [2024-12-09 15:14:03.677572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:01.992 [2024-12-09 15:14:03.677581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:01.992 [2024-12-09 15:14:03.677608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.677615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.677622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.677629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.677637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.677646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.677652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.677658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.677665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.677671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.677678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.677684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.677691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.677697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.677703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:01.992 [2024-12-09 15:14:03.677710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.677716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:01.992 [2024-12-09 15:14:03.677723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:01.992 [2024-12-09 15:14:03.677729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:21:01.992 [2024-12-09 15:14:03.677735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:01.992 [2024-12-09 15:14:03.678008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.678022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76f870 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.678029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76f870 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.678178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.678188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76e1b0 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.678195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76e1b0 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.678287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.678298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x764790 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.678305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x764790 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.678520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.992 [2024-12-09 15:14:03.678532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x770750 with addr=10.0.0.2, port=4420 00:21:01.992 [2024-12-09 15:14:03.678539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x770750 is same with the state(6) to be set 00:21:01.992 [2024-12-09 15:14:03.678568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f870 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.678578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76e1b0 (9): Bad file descriptor 00:21:01.992 [2024-12-09 15:14:03.678591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x764790 (9): Bad file descriptor 00:21:01.993 [2024-12-09 15:14:03.678600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x770750 (9): Bad file descriptor 00:21:01.993 [2024-12-09 15:14:03.678622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:01.993 [2024-12-09 15:14:03.678630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:01.993 [2024-12-09 15:14:03.678637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:01.993 [2024-12-09 15:14:03.678644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:01.993 [2024-12-09 15:14:03.678652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:01.993 [2024-12-09 15:14:03.678659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:01.993 [2024-12-09 15:14:03.678665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:01.993 [2024-12-09 15:14:03.678671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:01.993 [2024-12-09 15:14:03.678677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:01.993 [2024-12-09 15:14:03.678683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:01.993 [2024-12-09 15:14:03.678693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:01.993 [2024-12-09 15:14:03.678699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:01.993 [2024-12-09 15:14:03.678706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:01.993 [2024-12-09 15:14:03.678714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:01.993 [2024-12-09 15:14:03.678722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:01.993 [2024-12-09 15:14:03.678728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
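The repeated "posix_sock_create: connect() failed, errno = 111" entries above are the initiator's reconnect attempts being refused while the target side is torn down; on Linux, errno 111 is ECONNREFUSED. A minimal, self-contained C sketch (not part of the SPDK test code) that reproduces this classification against the address and port reported in the log:

    /* connect_probe.c - sketch only: attempt a TCP connect to the NVMe/TCP
     * target address from the log (10.0.0.2:4420) and report errno the same
     * way the log above does. With the target already stopped, this prints
     * errno 111 (ECONNREFUSED). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* Mirrors the log message format: errno number plus its name. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }

        close(fd);
        return 0;
    }

The bdev_nvme reset path keeps attempting this connect for each subsystem, which is consistent with the per-cnode "controller reinitialization failed" and "Resetting controller failed" messages above.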
00:21:02.252 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1487817 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1487817 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1487817 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:03.631 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.631 rmmod nvme_tcp 00:21:03.631 
rmmod nvme_fabrics 00:21:03.631 rmmod nvme_keyring 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1487644 ']' 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1487644 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1487644 ']' 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1487644 00:21:03.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1487644) - No such process 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1487644 is not found' 00:21:03.631 Process with pid 1487644 is not found 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.631 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.536 00:21:05.536 real 0m7.083s 00:21:05.536 user 0m16.094s 00:21:05.536 sys 0m1.261s 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.536 ************************************ 00:21:05.536 END TEST nvmf_shutdown_tc3 00:21:05.536 ************************************ 00:21:05.536 15:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.536 ************************************ 00:21:05.536 START TEST nvmf_shutdown_tc4 00:21:05.536 ************************************ 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.536 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:05.537 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:05.537 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.537 15:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:05.537 Found net devices under 0000:af:00.0: cvl_0_0 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:05.537 Found net devices under 0000:af:00.1: cvl_0_1 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:05.537 15:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.537 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:05.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:21:05.798 00:21:05.798 --- 10.0.0.2 ping statistics --- 00:21:05.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.798 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:21:05.798 00:21:05.798 --- 10.0.0.1 ping statistics --- 00:21:05.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.798 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1488951 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1488951 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1488951 ']' 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
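The nvmftestinit trace above boils down to a short, reproducible sequence: move the target-side e810 port into a private network namespace, address both ends, open TCP port 4420, check connectivity, and launch the target inside the namespace. The sketch below is a condensed reconstruction using only the interface names (cvl_0_0, cvl_0_1), addresses (10.0.0.2 target, 10.0.0.1 initiator), namespace name and nvmf_tgt options visible in the trace; it abbreviates the workspace path and drops the iptables comment, so treat it as a sketch of what the common.sh helpers do here, not the authoritative implementation.

  # Move the target-side port into its own namespace so both ends of the
  # NVMe/TCP connection can run on one host over the physical NICs.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address the initiator side (root namespace) and the target side (namespace),
  # then bring the links up.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port and sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Start the target inside the namespace with the core mask used by the test.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

Once nvmf_tgt is up on /var/tmp/spdk.sock, the transport is created over RPC (nvmf_create_transport -t tcp -o -u 8192 in the trace that follows), which leads to the Malloc bdevs and the 10.0.0.2:4420 listener seen further down.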
00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.798 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:05.798 [2024-12-09 15:14:07.588975] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:21:05.799 [2024-12-09 15:14:07.589026] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.057 [2024-12-09 15:14:07.666055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.057 [2024-12-09 15:14:07.707211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.057 [2024-12-09 15:14:07.707251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.057 [2024-12-09 15:14:07.707258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.057 [2024-12-09 15:14:07.707264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.057 [2024-12-09 15:14:07.707269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.057 [2024-12-09 15:14:07.708711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.057 [2024-12-09 15:14:07.708820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.057 [2024-12-09 15:14:07.708924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.057 [2024-12-09 15:14:07.708925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.057 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.057 [2024-12-09 15:14:07.846041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:06.316 15:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.316 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.316 Malloc1 
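The per-subsystem RPC batch that the cat loop above writes to rpcs.txt is not echoed in the trace, so the exact commands are not visible here. Judging from the Malloc1..Malloc10 bdevs, the nqn.2016-06.io.spdk:cnodeN subsystem names and the 10.0.0.2:4420 listener that show up in the surrounding output, each of the ten entries plausibly amounts to something like the sketch below; the bdev size and serial number are illustrative placeholders, not values taken from the log.

  # Hypothetical setup for one subsystem (cnode1/Malloc1); the test batches
  # ten of these through a single rpc_cmd call against /var/tmp/spdk.sock.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

tc4 then points spdk_nvme_perf at that listener and, a few seconds into the run, kills the target underneath it; the aborted-write completions that follow are exactly what this shutdown test is exercising.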
00:21:06.316 [2024-12-09 15:14:07.961621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.316 Malloc2 00:21:06.316 Malloc3 00:21:06.316 Malloc4 00:21:06.316 Malloc5 00:21:06.574 Malloc6 00:21:06.574 Malloc7 00:21:06.574 Malloc8 00:21:06.574 Malloc9 00:21:06.574 Malloc10 00:21:06.574 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.574 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:06.574 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.574 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:06.832 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1489151 00:21:06.832 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:06.832 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:06.832 [2024-12-09 15:14:08.468715] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1488951 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1488951 ']' 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1488951 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1488951 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1488951' 00:21:12.109 killing process with pid 1488951 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1488951 00:21:12.109 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1488951 00:21:12.109 Write completed with error (sct=0, 
sc=8)
[... long runs of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" omitted: these are the in-flight perf writes being failed back while the just-killed target tears down its queues (sct=0/sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion"; -6 is ENXIO, matching the "No such device or address" transport errors below). The distinct messages interleaved with those runs were: ...]
[2024-12-09 15:14:13.473577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[2024-12-09 15:14:13.474482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-12-09 15:14:13.475513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-12-09 15:14:13.476935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
NVMe io qpair process completion error
[2024-12-09 15:14:13.477339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00eb0 is same with the state(6) to be set
[2024-12-09 15:14:13.477378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00eb0 is same with the state(6) to be set
[2024-12-09 15:14:13.477981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[2024-12-09 15:14:13.478758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-12-09 15:14:13.479762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-12-09 15:14:13.480105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff170 is same with the state(6) to be set (message repeated seven more times for the same tqpair through 15:14:13.480185)
[2024-12-09 15:14:13.481569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
NVMe io qpair process completion error
[2024-12-09 15:14:13.482562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... aborted-write completions continue ...] 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error
(sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 [2024-12-09 15:14:13.483450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.112 Write completed with error (sct=0, sc=8) 00:21:12.112 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 
Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 [2024-12-09 15:14:13.484441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 
Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 [2024-12-09 15:14:13.486471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:12.113 NVMe io qpair process completion error 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 starting I/O failed: -6 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.113 Write completed with error (sct=0, sc=8) 00:21:12.114 Write completed with error (sct=0, sc=8) 
00:21:12.114 Write completed with error (sct=0, sc=8)
00:21:12.114 starting I/O failed: -6
...
00:21:12.114 [2024-12-09 15:14:13.487370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:12.114 Write completed with error (sct=0, sc=8)
00:21:12.114 starting I/O failed: -6
...
00:21:12.114 [2024-12-09 15:14:13.488260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:12.114 Write completed with error (sct=0, sc=8)
00:21:12.114 starting I/O failed: -6
...
00:21:12.114 [2024-12-09 15:14:13.489325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:12.115 Write completed with error (sct=0, sc=8)
00:21:12.115 starting I/O failed: -6
...
00:21:12.115 [2024-12-09 15:14:13.491211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:12.115 NVMe io qpair process completion error
00:21:12.115 Write completed with error (sct=0, sc=8)
00:21:12.115 starting I/O failed: -6
...
00:21:12.115 [2024-12-09 15:14:13.492259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:12.115 Write completed with error (sct=0, sc=8)
00:21:12.115 starting I/O failed: -6
...
00:21:12.116 [2024-12-09 15:14:13.493042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:12.116 Write completed with error (sct=0, sc=8)
00:21:12.116 starting I/O failed: -6
...
00:21:12.116 [2024-12-09 15:14:13.494064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:12.116 Write completed with error (sct=0, sc=8)
00:21:12.116 starting I/O failed: -6
...
00:21:12.116 [2024-12-09 15:14:13.498613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:12.116 NVMe io qpair process completion error
00:21:12.117 Write completed with error (sct=0, sc=8)
00:21:12.117 starting I/O failed: -6
...
00:21:12.117 [2024-12-09 15:14:13.499878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:12.117 starting I/O failed: -6
00:21:12.117 Write completed with error (sct=0, sc=8)
...
00:21:12.117 [2024-12-09 15:14:13.501778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:12.117 Write completed with error (sct=0, sc=8)
00:21:12.117 starting I/O failed: -6
...
00:21:12.118 [2024-12-09 15:14:13.503550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:12.118 NVMe io qpair process completion error
error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 [2024-12-09 15:14:13.504524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 
00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 [2024-12-09 15:14:13.505319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.118 Write completed with error (sct=0, sc=8) 00:21:12.118 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 
starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 [2024-12-09 15:14:13.506322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting 
I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O 
failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 [2024-12-09 15:14:13.507953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:12.119 NVMe io qpair process completion error 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 starting I/O failed: -6 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.119 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed 
with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 [2024-12-09 15:14:13.508936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 [2024-12-09 
15:14:13.509827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 
Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 [2024-12-09 15:14:13.510860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.120 
starting I/O failed: -6 00:21:12.120 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 [2024-12-09 15:14:13.515580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:12.121 NVMe io qpair process completion error 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error 
(sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O 
failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 
Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.121 starting I/O failed: -6 00:21:12.121 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write 
completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, 
sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 [2024-12-09 15:14:13.522583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, 
sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.122 starting I/O failed: -6 00:21:12.122 Write completed with error (sct=0, sc=8) 00:21:12.123 [2024-12-09 15:14:13.523471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 
00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 [2024-12-09 15:14:13.524511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 
00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.123 starting I/O failed: -6 00:21:12.123 Write completed with error (sct=0, sc=8) 00:21:12.124 starting I/O failed: -6 00:21:12.124 Write completed with error (sct=0, sc=8) 00:21:12.124 starting I/O failed: -6 00:21:12.124 Write completed with error (sct=0, sc=8) 00:21:12.124 starting I/O failed: -6 00:21:12.124 Write completed with error (sct=0, sc=8) 00:21:12.124 starting I/O failed: -6 00:21:12.124 Write completed with error (sct=0, sc=8) 00:21:12.124 starting I/O failed: -6 00:21:12.124 [2024-12-09 15:14:13.526976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:12.124 NVMe io qpair process completion error 00:21:12.124 Initializing NVMe Controllers 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:12.124 Controller IO queue size 128, less than required. 00:21:12.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
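The repeated "Controller IO queue size 128, less than required" notices above indicate that the perf tool requested a deeper queue than the fabrics controller advertises, so surplus requests sit queued inside the NVMe driver. Below is a minimal, hedged sketch of rerunning the spdk_nvme_perf binary this test invokes with a lower queue depth and a smaller I/O size; the queue depth, I/O size, workload, runtime and subsystem NQN are illustrative assumptions, not values taken from this job, and spdk_nvme_perf --help on the build in use is the authoritative option reference.

#!/usr/bin/env bash
# Hypothetical re-run of the perf tool used by this test, keeping the queue
# depth at or below the controller's advertised IO queue size (128) and
# using a small I/O size, so requests are not held back in the NVMe driver.
# All values below are illustrative assumptions, not taken from this job.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

args=(
  -q 64           # I/O queue depth, kept <= the reported queue size of 128
  -o 4096         # I/O size in bytes (4 KiB)
  -w randwrite    # workload pattern
  -t 10           # run time in seconds
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
)

"$PERF" "${args[@]}"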
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:12.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:12.124 Initialization complete. Launching workers.
00:21:12.124 ========================================================
00:21:12.124 Latency(us)
00:21:12.124 Device Information : IOPS MiB/s Average min max
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2214.60 95.16 57801.93 800.74 109968.18
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2203.88 94.70 58095.51 715.66 127386.99
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2176.20 93.51 58875.20 1165.81 104866.65
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2204.73 94.73 58156.48 551.02 102454.15
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2217.82 95.30 57159.71 750.54 125400.40
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2214.82 95.17 57247.29 763.93 98129.79
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2136.51 91.80 59357.22 703.92 97629.72
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2149.38 92.36 59018.45 892.05 97422.52
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2234.77 96.03 56779.01 735.72 96602.71
00:21:12.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2215.89 95.21 57311.91 551.50 108016.83
00:21:12.124 ========================================================
00:21:12.124 Total : 21968.60 943.96 57969.33 551.02 127386.99
00:21:12.124
00:21:12.124 [2024-12-09 15:14:13.529984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f3410 is same with the state(6) to be set
00:21:12.124 [2024-12-09 15:14:13.530030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f3740 is same with the state(6) to be set
00:21:12.124 [2024-12-09 15:14:13.530059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4ae0 is same with the state(6) to be set
00:21:12.124 [2024-12-09 15:14:13.530087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f3a70 is same with the state(6) to be set
00:21:12.124 [2024-12-09 15:14:13.530115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4720 is same with the state(6) to be set
00:21:12.124 [2024-12-09 15:14:13.530145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x15f2ef0 is same with the state(6) to be set 00:21:12.124 [2024-12-09 15:14:13.530173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f4900 is same with the state(6) to be set 00:21:12.124 [2024-12-09 15:14:13.530201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2890 is same with the state(6) to be set 00:21:12.124 [2024-12-09 15:14:13.530234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2560 is same with the state(6) to be set 00:21:12.124 [2024-12-09 15:14:13.530264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f2bc0 is same with the state(6) to be set 00:21:12.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:12.124 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1489151 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1489151 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1489151 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:13.061 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.320 15:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.320 rmmod nvme_tcp 00:21:13.320 rmmod nvme_fabrics 00:21:13.320 rmmod nvme_keyring 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1488951 ']' 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1488951 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1488951 ']' 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1488951 00:21:13.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1488951) - No such process 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1488951 is not found' 00:21:13.320 Process with pid 1488951 is not found 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.320 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.225 15:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.484 00:21:15.484 real 0m9.794s 00:21:15.484 user 0m24.983s 00:21:15.484 sys 0m5.165s 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:15.484 ************************************ 00:21:15.484 END TEST nvmf_shutdown_tc4 00:21:15.484 ************************************ 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:15.484 00:21:15.484 real 0m39.827s 00:21:15.484 user 1m35.656s 00:21:15.484 sys 0m13.912s 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:15.484 ************************************ 00:21:15.484 END TEST nvmf_shutdown 00:21:15.484 ************************************ 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.484 ************************************ 00:21:15.484 START TEST nvmf_nsid 00:21:15.484 ************************************ 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:15.484 * Looking for test storage... 
00:21:15.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:15.484 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:15.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.744 --rc genhtml_branch_coverage=1 00:21:15.744 --rc genhtml_function_coverage=1 00:21:15.744 --rc genhtml_legend=1 00:21:15.744 --rc geninfo_all_blocks=1 00:21:15.744 --rc geninfo_unexecuted_blocks=1 00:21:15.744 00:21:15.744 ' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:15.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.744 --rc genhtml_branch_coverage=1 00:21:15.744 --rc genhtml_function_coverage=1 00:21:15.744 --rc genhtml_legend=1 00:21:15.744 --rc geninfo_all_blocks=1 00:21:15.744 --rc geninfo_unexecuted_blocks=1 00:21:15.744 00:21:15.744 ' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:15.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.744 --rc genhtml_branch_coverage=1 00:21:15.744 --rc genhtml_function_coverage=1 00:21:15.744 --rc genhtml_legend=1 00:21:15.744 --rc geninfo_all_blocks=1 00:21:15.744 --rc geninfo_unexecuted_blocks=1 00:21:15.744 00:21:15.744 ' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:15.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.744 --rc genhtml_branch_coverage=1 00:21:15.744 --rc genhtml_function_coverage=1 00:21:15.744 --rc genhtml_legend=1 00:21:15.744 --rc geninfo_all_blocks=1 00:21:15.744 --rc geninfo_unexecuted_blocks=1 00:21:15.744 00:21:15.744 ' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.744 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.745 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:22.314 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:22.314 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
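The "Found 0000:af:00.0 (0x8086 - 0x159b)" messages above come from the common.sh device scan, which selects E810 PCI functions and then resolves each one to its kernel net device through sysfs (the pci_net_devs glob visible in the trace that follows). A small sketch of that lookup, assuming the same PCI address as this host; substitute your own device:

#!/usr/bin/env bash
# Minimal sketch of the sysfs lookup behind the "Found net devices under
# 0000:af:00.0" messages: a bound PCI function lists its net device names
# under /sys/bus/pci/devices/<pci>/net/. The address below is the one this
# host reported and is only an example.
pci=0000:af:00.0
for d in /sys/bus/pci/devices/"$pci"/net/*; do
  [ -e "$d" ] || continue          # no net device bound to this function
  echo "Found net device under $pci: $(basename "$d")"
done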
00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.314 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:22.315 Found net devices under 0000:af:00.0: cvl_0_0 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:22.315 Found net devices under 0000:af:00.1: cvl_0_1 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.315 15:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:22.315 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:22.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:21:22.315 00:21:22.315 --- 10.0.0.2 ping statistics --- 00:21:22.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.315 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:21:22.315 00:21:22.315 --- 10.0.0.1 ping statistics --- 00:21:22.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.315 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1493632 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1493632 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1493632 ']' 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.315 [2024-12-09 15:14:23.358171] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
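The nvmf_tcp_init trace above builds a two-ended TCP test bed before nvmf_tgt starts: the target port (cvl_0_0) is moved into a network namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms the path. Condensed into a standalone sketch; interface names and addresses mirror this log, and it assumes root on a host with the same two-port NIC layout:

#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init steps traced above.
set -e
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator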
00:21:22.315 [2024-12-09 15:14:23.358233] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.315 [2024-12-09 15:14:23.437307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.315 [2024-12-09 15:14:23.476881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.315 [2024-12-09 15:14:23.476915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.315 [2024-12-09 15:14:23.476925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.315 [2024-12-09 15:14:23.476931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.315 [2024-12-09 15:14:23.476936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.315 [2024-12-09 15:14:23.477460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1493659 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:22.315 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5c093ee6-0168-4818-a70b-4e2982ccd2fc 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=52987c6a-443e-4a96-ab69-492efe27d07a 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d79085e1-fbe2-4b2c-b2a9-38581cd09d37 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.316 null0 00:21:22.316 null1 00:21:22.316 [2024-12-09 15:14:23.659997] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:21:22.316 [2024-12-09 15:14:23.660038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493659 ] 00:21:22.316 null2 00:21:22.316 [2024-12-09 15:14:23.664976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.316 [2024-12-09 15:14:23.689150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1493659 /var/tmp/tgt2.sock 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1493659 ']' 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:22.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.316 [2024-12-09 15:14:23.732661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.316 [2024-12-09 15:14:23.771708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:22.316 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:22.575 [2024-12-09 15:14:24.305520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.575 [2024-12-09 15:14:24.321598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:22.575 nvme0n1 nvme0n2 00:21:22.575 nvme1n1 00:21:22.833 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:22.833 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:22.833 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:23.769 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:24.705 15:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5c093ee6-0168-4818-a70b-4e2982ccd2fc 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:24.705 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5c093ee601684818a70b4e2982ccd2fc 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5C093EE601684818A70B4E2982CCD2FC 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5C093EE601684818A70B4E2982CCD2FC == \5\C\0\9\3\E\E\6\0\1\6\8\4\8\1\8\A\7\0\B\4\E\2\9\8\2\C\C\D\2\F\C ]] 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 52987c6a-443e-4a96-ab69-492efe27d07a 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=52987c6a443e4a96ab69492efe27d07a 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 52987C6A443E4A96AB69492EFE27D07A 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 52987C6A443E4A96AB69492EFE27D07A == \5\2\9\8\7\C\6\A\4\4\3\E\4\A\9\6\A\B\6\9\4\9\2\E\F\E\2\7\D\0\7\A ]] 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:24.966 15:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d79085e1-fbe2-4b2c-b2a9-38581cd09d37 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d79085e1fbe24b2cb2a938581cd09d37 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D79085E1FBE24B2CB2A938581CD09D37 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D79085E1FBE24B2CB2A938581CD09D37 == \D\7\9\0\8\5\E\1\F\B\E\2\4\B\2\C\B\2\A\9\3\8\5\8\1\C\D\0\9\D\3\7 ]] 00:21:24.966 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:25.293 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:25.293 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:25.293 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1493659 00:21:25.293 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1493659 ']' 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1493659 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493659 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493659' 00:21:25.294 killing process with pid 1493659 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1493659 00:21:25.294 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1493659 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.580 rmmod nvme_tcp 00:21:25.580 rmmod nvme_fabrics 00:21:25.580 rmmod nvme_keyring 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1493632 ']' 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1493632 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1493632 ']' 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1493632 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493632 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493632' 00:21:25.580 killing process with pid 1493632 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1493632 00:21:25.580 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1493632 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.839 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.376 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:28.376 00:21:28.376 real 0m12.407s 00:21:28.376 user 0m9.584s 
00:21:28.376 sys 0m5.539s 00:21:28.376 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.376 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.376 ************************************ 00:21:28.376 END TEST nvmf_nsid 00:21:28.376 ************************************ 00:21:28.376 15:14:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:28.376 00:21:28.376 real 11m58.111s 00:21:28.376 user 25m33.114s 00:21:28.376 sys 3m42.732s 00:21:28.376 15:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.376 15:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:28.376 ************************************ 00:21:28.376 END TEST nvmf_target_extra 00:21:28.376 ************************************ 00:21:28.376 15:14:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:28.376 15:14:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.376 15:14:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.376 15:14:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.376 ************************************ 00:21:28.376 START TEST nvmf_host 00:21:28.376 ************************************ 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:28.376 * Looking for test storage... 00:21:28.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:28.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.376 --rc genhtml_branch_coverage=1 00:21:28.376 --rc genhtml_function_coverage=1 00:21:28.376 --rc genhtml_legend=1 00:21:28.376 --rc geninfo_all_blocks=1 00:21:28.376 --rc geninfo_unexecuted_blocks=1 00:21:28.376 00:21:28.376 ' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:28.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.376 --rc genhtml_branch_coverage=1 00:21:28.376 --rc genhtml_function_coverage=1 00:21:28.376 --rc genhtml_legend=1 00:21:28.376 --rc geninfo_all_blocks=1 00:21:28.376 --rc geninfo_unexecuted_blocks=1 00:21:28.376 00:21:28.376 ' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:28.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.376 --rc genhtml_branch_coverage=1 00:21:28.376 --rc genhtml_function_coverage=1 00:21:28.376 --rc genhtml_legend=1 00:21:28.376 --rc geninfo_all_blocks=1 00:21:28.376 --rc geninfo_unexecuted_blocks=1 00:21:28.376 00:21:28.376 ' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:28.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.376 --rc genhtml_branch_coverage=1 00:21:28.376 --rc genhtml_function_coverage=1 00:21:28.376 --rc genhtml_legend=1 00:21:28.376 --rc geninfo_all_blocks=1 00:21:28.376 --rc geninfo_unexecuted_blocks=1 00:21:28.376 00:21:28.376 ' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
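The lcov gate above hinges on an element-wise version comparison: split both versions into fields, compare field by field, treat missing fields as zero. A rough plain-bash equivalent, assuming purely numeric dot-separated versions (the function name is illustrative, not the helper defined in scripts/common.sh):

# version_lt A B -> returns 0 if A < B, 1 otherwise (numeric dot-separated versions only)
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"        # prints: 1.15 < 2
version_lt 2.1 2.0 || echo "2.1 >= 2.0"     # prints: 2.1 >= 2.0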
00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.376 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.377 ************************************ 00:21:28.377 START TEST nvmf_multicontroller 00:21:28.377 ************************************ 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:28.377 * Looking for test storage... 
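The "[: : integer expression expected" message emitted from common.sh line 33 is the usual test(1) failure when a numeric comparison receives an empty value, as in '[' '' -eq 1 ']'. A small illustration of the failure and one common defensive spelling (the variable name is made up for the example):

# Reproduces the error: an empty value fed into a numeric test
SPDK_TEST_FOO=""
[ "$SPDK_TEST_FOO" -eq 1 ] && echo enabled
# -> bash: [: : integer expression expected  (test exits with status 2, echo is skipped)

# Defensive variants that treat empty/unset as 0
[ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo enabled      # default the operand to 0
(( ${SPDK_TEST_FOO:-0} == 1 )) && echo enabled       # arithmetic form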
00:21:28.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:28.377 15:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:28.377 15:14:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.377 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.378 15:14:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.378 15:14:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.955 
15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:34.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:34.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.955 15:14:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:34.955 Found net devices under 0000:af:00.0: cvl_0_0 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:34.955 Found net devices under 0000:af:00.1: cvl_0_1 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
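Device discovery above first matches supported PCI IDs (both ports in this run are Intel E810, 8086:159b) and then resolves each PCI function to its kernel network interface through sysfs. A hedged sketch of that sysfs lookup; the PCI addresses are the ones from this run, and the helper name is invented for the example:

# List the net interfaces backing a PCI network function, e.g. 0000:af:00.0
netdevs_for_pci() {
    local pci=$1 d
    for d in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$d" ] || continue     # no net device bound (driver not loaded, VF detached, ...)
        echo "${d##*/}"             # strip the sysfs path, keep the interface name
    done
}

for pci in 0000:af:00.0 0000:af:00.1; do
    echo "Found net devices under $pci: $(netdevs_for_pci "$pci" | tr '\n' ' ')"
done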
00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:34.955 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.956 15:14:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:34.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:21:34.956 00:21:34.956 --- 10.0.0.2 ping statistics --- 00:21:34.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.956 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:21:34.956 00:21:34.956 --- 10.0.0.1 ping statistics --- 00:21:34.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.956 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1497937 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1497937 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1497937 ']' 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 [2024-12-09 15:14:36.122520] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
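nvmf_tcp_init above splits the two E810 ports across network namespaces so target (10.0.0.2) and initiator (10.0.0.1) traffic actually crosses the link between the ports. Condensed to its core commands, assuming the same interface names and addresses as this run (address flushing, cleanup traps and the SPDK iptables comment tag are omitted), the setup is roughly:

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # port moved into the namespace (target side)
INI_IF=cvl_0_1        # port left in the root namespace (initiator side)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic in to the target port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# sanity checks: initiator -> target, then target -> initiator
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1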
00:21:34.956 [2024-12-09 15:14:36.122560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.956 [2024-12-09 15:14:36.199507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:34.956 [2024-12-09 15:14:36.237650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.956 [2024-12-09 15:14:36.237687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.956 [2024-12-09 15:14:36.237694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.956 [2024-12-09 15:14:36.237700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.956 [2024-12-09 15:14:36.237705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.956 [2024-12-09 15:14:36.238965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.956 [2024-12-09 15:14:36.239069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.956 [2024-12-09 15:14:36.239070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 [2024-12-09 15:14:36.382371] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 Malloc0 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 [2024-12-09 15:14:36.442814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 [2024-12-09 15:14:36.450731] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 Malloc1 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.956 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1497961 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1497961 /var/tmp/bdevperf.sock 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1497961 ']' 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
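For reference, the target-side configuration traced above reduces to a short JSON-RPC sequence. A minimal sketch, assuming a target reachable on SPDK's default /var/tmp/spdk.sock RPC socket and rpc.py at scripts/rpc.py (the test itself goes through its rpc_cmd wrapper), would be:

# Sketch only: socket and script locations are assumptions; the calls mirror the rpc_cmd invocations above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (cnode2, backed by Malloc1) is set up the same way, after which bdevperf is launched with -z -r /var/tmp/bdevperf.sock so it can be configured over its own RPC socket.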
00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.957 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.216 NVMe0n1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.216 1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.216 request: 00:21:35.216 { 00:21:35.216 "name": "NVMe0", 00:21:35.216 "trtype": "tcp", 00:21:35.216 "traddr": "10.0.0.2", 00:21:35.216 "adrfam": "ipv4", 00:21:35.216 "trsvcid": "4420", 00:21:35.216 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:35.216 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:35.216 "hostaddr": "10.0.0.1", 00:21:35.216 "prchk_reftag": false, 00:21:35.216 "prchk_guard": false, 00:21:35.216 "hdgst": false, 00:21:35.216 "ddgst": false, 00:21:35.216 "allow_unrecognized_csi": false, 00:21:35.216 "method": "bdev_nvme_attach_controller", 00:21:35.216 "req_id": 1 00:21:35.216 } 00:21:35.216 Got JSON-RPC error response 00:21:35.216 response: 00:21:35.216 { 00:21:35.216 "code": -114, 00:21:35.216 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:35.216 } 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.216 request: 00:21:35.216 { 00:21:35.216 "name": "NVMe0", 00:21:35.216 "trtype": "tcp", 00:21:35.216 "traddr": "10.0.0.2", 00:21:35.216 "adrfam": "ipv4", 00:21:35.216 "trsvcid": "4420", 00:21:35.216 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:35.216 "hostaddr": "10.0.0.1", 00:21:35.216 "prchk_reftag": false, 00:21:35.216 "prchk_guard": false, 00:21:35.216 "hdgst": false, 00:21:35.216 "ddgst": false, 00:21:35.216 "allow_unrecognized_csi": false, 00:21:35.216 "method": "bdev_nvme_attach_controller", 00:21:35.216 "req_id": 1 00:21:35.216 } 00:21:35.216 Got JSON-RPC error response 00:21:35.216 response: 00:21:35.216 { 00:21:35.216 "code": -114, 00:21:35.216 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:35.216 } 00:21:35.216 15:14:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.216 request: 00:21:35.216 { 00:21:35.216 "name": "NVMe0", 00:21:35.216 "trtype": "tcp", 00:21:35.216 "traddr": "10.0.0.2", 00:21:35.216 "adrfam": "ipv4", 00:21:35.216 "trsvcid": "4420", 00:21:35.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.216 "hostaddr": "10.0.0.1", 00:21:35.216 "prchk_reftag": false, 00:21:35.216 "prchk_guard": false, 00:21:35.216 "hdgst": false, 00:21:35.216 "ddgst": false, 00:21:35.216 "multipath": "disable", 00:21:35.216 "allow_unrecognized_csi": false, 00:21:35.216 "method": "bdev_nvme_attach_controller", 00:21:35.216 "req_id": 1 00:21:35.216 } 00:21:35.216 Got JSON-RPC error response 00:21:35.216 response: 00:21:35.216 { 00:21:35.216 "code": -114, 00:21:35.216 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:35.216 } 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.216 15:14:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:35.216 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.217 request: 00:21:35.217 { 00:21:35.217 "name": "NVMe0", 00:21:35.217 "trtype": "tcp", 00:21:35.217 "traddr": "10.0.0.2", 00:21:35.217 "adrfam": "ipv4", 00:21:35.217 "trsvcid": "4420", 00:21:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.217 "hostaddr": "10.0.0.1", 00:21:35.217 "prchk_reftag": false, 00:21:35.217 "prchk_guard": false, 00:21:35.217 "hdgst": false, 00:21:35.217 "ddgst": false, 00:21:35.217 "multipath": "failover", 00:21:35.217 "allow_unrecognized_csi": false, 00:21:35.217 "method": "bdev_nvme_attach_controller", 00:21:35.217 "req_id": 1 00:21:35.217 } 00:21:35.217 Got JSON-RPC error response 00:21:35.217 response: 00:21:35.217 { 00:21:35.217 "code": -114, 00:21:35.217 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:35.217 } 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.217 15:14:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.475 NVMe0n1 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
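Each rejected attach above returns JSON-RPC error -114 because a controller named NVMe0 already exists and the new request either points at a different subsystem (cnode2), disables multipath, or requests failover on the already-registered network path; the final call, which attaches NVMe0 again through port 4421 for the same cnode1 subsystem, is accepted. A hedged sketch of that sequence against bdevperf's RPC socket (names, addresses and ports taken from the log; the rpc.py location is an assumption):

# Sketch only: -s points rpc.py at bdevperf's socket, not the nvmf target's.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# Re-attaching NVMe0 with a different subnqn, with -x disable, or with -x failover on the same path fails with -114, as shown above.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers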
00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.475 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:35.475 15:14:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.852 { 00:21:36.852 "results": [ 00:21:36.852 { 00:21:36.852 "job": "NVMe0n1", 00:21:36.852 "core_mask": "0x1", 00:21:36.852 "workload": "write", 00:21:36.852 "status": "finished", 00:21:36.852 "queue_depth": 128, 00:21:36.852 "io_size": 4096, 00:21:36.852 "runtime": 1.004634, 00:21:36.852 "iops": 25353.511826197402, 00:21:36.852 "mibps": 99.0371555710836, 00:21:36.852 "io_failed": 0, 00:21:36.852 "io_timeout": 0, 00:21:36.852 "avg_latency_us": 5042.056992695709, 00:21:36.852 "min_latency_us": 2980.327619047619, 00:21:36.852 "max_latency_us": 12233.386666666667 00:21:36.852 } 00:21:36.852 ], 00:21:36.852 "core_count": 1 00:21:36.852 } 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1497961 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1497961 ']' 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1497961 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497961 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497961' 00:21:36.852 killing process with pid 1497961 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1497961 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1497961 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:36.852 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:37.111 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:37.111 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:37.111 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:37.111 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:37.111 [2024-12-09 15:14:36.555553] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:21:37.111 [2024-12-09 15:14:36.555602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497961 ] 00:21:37.111 [2024-12-09 15:14:36.626929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.111 [2024-12-09 15:14:36.666255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.111 [2024-12-09 15:14:37.246812] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9a88c2fe-e5d2-44b8-94ea-4cbcdb2cd6fb already exists 00:21:37.112 [2024-12-09 15:14:37.246840] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9a88c2fe-e5d2-44b8-94ea-4cbcdb2cd6fb alias for bdev NVMe1n1 00:21:37.112 [2024-12-09 15:14:37.246848] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:37.112 Running I/O for 1 seconds... 00:21:37.112 25343.00 IOPS, 99.00 MiB/s 00:21:37.112 Latency(us) 00:21:37.112 [2024-12-09T14:14:38.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.112 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:37.112 NVMe0n1 : 1.00 25353.51 99.04 0.00 0.00 5042.06 2980.33 12233.39 00:21:37.112 [2024-12-09T14:14:38.907Z] =================================================================================================================== 00:21:37.112 [2024-12-09T14:14:38.907Z] Total : 25353.51 99.04 0.00 0.00 5042.06 2980.33 12233.39 00:21:37.112 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.112 00:21:37.112 Latency(us) 00:21:37.112 [2024-12-09T14:14:38.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.112 [2024-12-09T14:14:38.907Z] =================================================================================================================== 00:21:37.112 [2024-12-09T14:14:38.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.112 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.112 rmmod nvme_tcp 00:21:37.112 rmmod nvme_fabrics 00:21:37.112 rmmod nvme_keyring 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:37.112 
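The results above (the JSON dump and the try.txt summary table) cover the perform_tests call: one core ran the 128-deep, 4 KiB write workload for about one second at roughly 25.4k IOPS (~99 MiB/s) with no failed or timed-out I/O. A hedged sketch of that run-and-teardown phase, using the bdevperf.py path printed in the log and assuming scripts/rpc.py for the target-side rpc_cmd wrapper:

# Sketch only: drives the I/O run over bdevperf's socket, then tears down the target-side subsystems.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2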
15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1497937 ']' 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1497937 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1497937 ']' 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1497937 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1497937 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1497937' 00:21:37.112 killing process with pid 1497937 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1497937 00:21:37.112 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1497937 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.371 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.372 15:14:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.276 15:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.276 00:21:39.276 real 0m11.136s 00:21:39.276 user 0m12.113s 00:21:39.276 sys 0m5.177s 00:21:39.276 15:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.276 15:14:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.276 ************************************ 00:21:39.276 END TEST nvmf_multicontroller 00:21:39.276 ************************************ 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.535 ************************************ 00:21:39.535 START TEST nvmf_aer 00:21:39.535 ************************************ 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:39.535 * Looking for test storage... 00:21:39.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:39.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.535 --rc genhtml_branch_coverage=1 00:21:39.535 --rc genhtml_function_coverage=1 00:21:39.535 --rc genhtml_legend=1 00:21:39.535 --rc geninfo_all_blocks=1 00:21:39.535 --rc geninfo_unexecuted_blocks=1 00:21:39.535 00:21:39.535 ' 00:21:39.535 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:39.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.535 --rc genhtml_branch_coverage=1 00:21:39.536 --rc genhtml_function_coverage=1 00:21:39.536 --rc genhtml_legend=1 00:21:39.536 --rc geninfo_all_blocks=1 00:21:39.536 --rc geninfo_unexecuted_blocks=1 00:21:39.536 00:21:39.536 ' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.536 --rc genhtml_branch_coverage=1 00:21:39.536 --rc genhtml_function_coverage=1 00:21:39.536 --rc genhtml_legend=1 00:21:39.536 --rc geninfo_all_blocks=1 00:21:39.536 --rc geninfo_unexecuted_blocks=1 00:21:39.536 00:21:39.536 ' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.536 --rc genhtml_branch_coverage=1 00:21:39.536 --rc genhtml_function_coverage=1 00:21:39.536 --rc genhtml_legend=1 00:21:39.536 --rc geninfo_all_blocks=1 00:21:39.536 --rc geninfo_unexecuted_blocks=1 00:21:39.536 00:21:39.536 ' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.536 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.795 15:14:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:46.365 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.365 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:46.366 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:46.366 Found net devices under 0000:af:00.0: cvl_0_0 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.366 15:14:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:46.366 Found net devices under 0000:af:00.1: cvl_0_1 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.366 15:14:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.366 
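The aer test's network bring-up above moves one E810 port into a dedicated namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exchange NVMe/TCP traffic on the same host. A condensed sketch of that wiring, using the interface names this rig reports (cvl_0_0 / cvl_0_1), which are specific to this setup:

# Sketch mirroring the ip/iptables calls traced above for this NIC naming.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

Reachability is then confirmed by the two ping probes that follow, after which nvmf_tgt is started inside the namespace via ip netns exec cvl_0_0_ns_spdk.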
15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:21:46.366 00:21:46.366 --- 10.0.0.2 ping statistics --- 00:21:46.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.366 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:46.366 00:21:46.366 --- 10.0.0.1 ping statistics --- 00:21:46.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.366 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1501915 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1501915 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1501915 ']' 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.366 [2024-12-09 15:14:47.338887] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:21:46.366 [2024-12-09 15:14:47.338932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.366 [2024-12-09 15:14:47.417105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.366 [2024-12-09 15:14:47.458025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.366 [2024-12-09 15:14:47.458061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.366 [2024-12-09 15:14:47.458068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.366 [2024-12-09 15:14:47.458074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.366 [2024-12-09 15:14:47.458079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.366 [2024-12-09 15:14:47.459552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.366 [2024-12-09 15:14:47.459660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.366 [2024-12-09 15:14:47.459770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.366 [2024-12-09 15:14:47.459770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.366 [2024-12-09 15:14:47.596069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.366 Malloc0 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.366 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 [2024-12-09 15:14:47.655764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 [ 00:21:46.367 { 00:21:46.367 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:46.367 "subtype": "Discovery", 00:21:46.367 "listen_addresses": [], 00:21:46.367 "allow_any_host": true, 00:21:46.367 "hosts": [] 00:21:46.367 }, 00:21:46.367 { 00:21:46.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.367 "subtype": "NVMe", 00:21:46.367 "listen_addresses": [ 00:21:46.367 { 00:21:46.367 "trtype": "TCP", 00:21:46.367 "adrfam": "IPv4", 00:21:46.367 "traddr": "10.0.0.2", 00:21:46.367 "trsvcid": "4420" 00:21:46.367 } 00:21:46.367 ], 00:21:46.367 "allow_any_host": true, 00:21:46.367 "hosts": [], 00:21:46.367 "serial_number": "SPDK00000000000001", 00:21:46.367 "model_number": "SPDK bdev Controller", 00:21:46.367 "max_namespaces": 2, 00:21:46.367 "min_cntlid": 1, 00:21:46.367 "max_cntlid": 65519, 00:21:46.367 "namespaces": [ 00:21:46.367 { 00:21:46.367 "nsid": 1, 00:21:46.367 "bdev_name": "Malloc0", 00:21:46.367 "name": "Malloc0", 00:21:46.367 "nguid": "3D7B63F0A04F410DBF1A45E3241CCB56", 00:21:46.367 "uuid": "3d7b63f0-a04f-410d-bf1a-45e3241ccb56" 00:21:46.367 } 00:21:46.367 ] 00:21:46.367 } 00:21:46.367 ] 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1501941 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 Malloc1 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 Asynchronous Event Request test 00:21:46.367 Attaching to 10.0.0.2 00:21:46.367 Attached to 10.0.0.2 00:21:46.367 Registering asynchronous event callbacks... 00:21:46.367 Starting namespace attribute notice tests for all controllers... 00:21:46.367 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:46.367 aer_cb - Changed Namespace 00:21:46.367 Cleaning up... 
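The AER exercise above is driven entirely over JSON-RPC plus the standalone aer helper. A condensed sketch of the same sequence with scripts/rpc.py is shown below, assuming the default /var/tmp/spdk.sock socket and the workspace paths from this job; host/aer.sh wraps these calls in rpc_cmd and waitforfile, so this is an approximation of the flow rather than the script itself.

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rm -f /tmp/aer_touch_file
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  until [ -e /tmp/aer_touch_file ]; do sleep 0.1; done   # aer creates the file once its AER callback is armed
  $RPC bdev_malloc_create 64 4096 --name Malloc1         # hot-adding a second namespace ...
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # ... raises the namespace-changed AEN
  wait $aerpid

The "aer_cb - Changed Namespace" line in the output above is the asynchronous event generated by that second nvmf_subsystem_add_ns call, and the follow-up nvmf_get_subsystems dump shows cnode1 now carrying both nsid 1 (Malloc0) and nsid 2 (Malloc1).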
00:21:46.367 [ 00:21:46.367 { 00:21:46.367 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:46.367 "subtype": "Discovery", 00:21:46.367 "listen_addresses": [], 00:21:46.367 "allow_any_host": true, 00:21:46.367 "hosts": [] 00:21:46.367 }, 00:21:46.367 { 00:21:46.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.367 "subtype": "NVMe", 00:21:46.367 "listen_addresses": [ 00:21:46.367 { 00:21:46.367 "trtype": "TCP", 00:21:46.367 "adrfam": "IPv4", 00:21:46.367 "traddr": "10.0.0.2", 00:21:46.367 "trsvcid": "4420" 00:21:46.367 } 00:21:46.367 ], 00:21:46.367 "allow_any_host": true, 00:21:46.367 "hosts": [], 00:21:46.367 "serial_number": "SPDK00000000000001", 00:21:46.367 "model_number": "SPDK bdev Controller", 00:21:46.367 "max_namespaces": 2, 00:21:46.367 "min_cntlid": 1, 00:21:46.367 "max_cntlid": 65519, 00:21:46.367 "namespaces": [ 00:21:46.367 { 00:21:46.367 "nsid": 1, 00:21:46.367 "bdev_name": "Malloc0", 00:21:46.367 "name": "Malloc0", 00:21:46.367 "nguid": "3D7B63F0A04F410DBF1A45E3241CCB56", 00:21:46.367 "uuid": "3d7b63f0-a04f-410d-bf1a-45e3241ccb56" 00:21:46.367 }, 00:21:46.367 { 00:21:46.367 "nsid": 2, 00:21:46.367 "bdev_name": "Malloc1", 00:21:46.367 "name": "Malloc1", 00:21:46.367 "nguid": "5089FA6966FB48AC8FA6942393E8B197", 00:21:46.367 "uuid": "5089fa69-66fb-48ac-8fa6-942393e8b197" 00:21:46.367 } 00:21:46.367 ] 00:21:46.367 } 00:21:46.367 ] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1501941 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.367 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.367 rmmod 
nvme_tcp 00:21:46.367 rmmod nvme_fabrics 00:21:46.367 rmmod nvme_keyring 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1501915 ']' 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1501915 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1501915 ']' 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1501915 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501915 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501915' 00:21:46.626 killing process with pid 1501915 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1501915 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1501915 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.626 15:14:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.163 00:21:49.163 real 0m9.361s 00:21:49.163 user 0m5.457s 00:21:49.163 sys 0m4.875s 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.163 ************************************ 00:21:49.163 END TEST nvmf_aer 00:21:49.163 ************************************ 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.163 ************************************ 00:21:49.163 START TEST nvmf_async_init 00:21:49.163 ************************************ 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:49.163 * Looking for test storage... 00:21:49.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:49.163 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.164 --rc genhtml_branch_coverage=1 00:21:49.164 --rc genhtml_function_coverage=1 00:21:49.164 --rc genhtml_legend=1 00:21:49.164 --rc geninfo_all_blocks=1 00:21:49.164 --rc geninfo_unexecuted_blocks=1 00:21:49.164 00:21:49.164 ' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.164 --rc genhtml_branch_coverage=1 00:21:49.164 --rc genhtml_function_coverage=1 00:21:49.164 --rc genhtml_legend=1 00:21:49.164 --rc geninfo_all_blocks=1 00:21:49.164 --rc geninfo_unexecuted_blocks=1 00:21:49.164 00:21:49.164 ' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.164 --rc genhtml_branch_coverage=1 00:21:49.164 --rc genhtml_function_coverage=1 00:21:49.164 --rc genhtml_legend=1 00:21:49.164 --rc geninfo_all_blocks=1 00:21:49.164 --rc geninfo_unexecuted_blocks=1 00:21:49.164 00:21:49.164 ' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.164 --rc genhtml_branch_coverage=1 00:21:49.164 --rc genhtml_function_coverage=1 00:21:49.164 --rc genhtml_legend=1 00:21:49.164 --rc geninfo_all_blocks=1 00:21:49.164 --rc geninfo_unexecuted_blocks=1 00:21:49.164 00:21:49.164 ' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.164 15:14:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:49.164 15:14:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2947932ddfa2493184e30a33f871ecc4 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.164 15:14:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:55.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:55.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:55.736 Found net devices under 0000:af:00.0: cvl_0_0 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:55.736 Found net devices under 0000:af:00.1: cvl_0_1 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.736 15:14:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:55.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:21:55.736 00:21:55.736 --- 10.0.0.2 ping statistics --- 00:21:55.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.736 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:21:55.736 00:21:55.736 --- 10.0.0.1 ping statistics --- 00:21:55.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.736 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.736 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1505437 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1505437 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1505437 ']' 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 [2024-12-09 15:14:56.702420] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
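nvmfappstart then launches a fresh nvmf_tgt inside the namespace and blocks until its RPC socket answers. A rough stand-in for that step is sketched below; the polling loop with rpc_get_methods is a simplification of the waitforlisten helper used by the test, not what autotest_common.sh literally does.

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # wait until the app has come up and opened its RPC socket
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Note the core mask: the aer run earlier used -m 0xF (four reactors), while async_init only needs -m 0x1, which is why this startup reports a single reactor on core 0.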
00:21:55.737 [2024-12-09 15:14:56.702464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.737 [2024-12-09 15:14:56.780155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.737 [2024-12-09 15:14:56.820371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.737 [2024-12-09 15:14:56.820407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.737 [2024-12-09 15:14:56.820414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.737 [2024-12-09 15:14:56.820420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.737 [2024-12-09 15:14:56.820425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.737 [2024-12-09 15:14:56.820944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 [2024-12-09 15:14:56.955915] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 null0 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2947932ddfa2493184e30a33f871ecc4 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:55.737 15:14:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 [2024-12-09 15:14:57.008167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 nvme0n1 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 [ 00:21:55.737 { 00:21:55.737 "name": "nvme0n1", 00:21:55.737 "aliases": [ 00:21:55.737 "2947932d-dfa2-4931-84e3-0a33f871ecc4" 00:21:55.737 ], 00:21:55.737 "product_name": "NVMe disk", 00:21:55.737 "block_size": 512, 00:21:55.737 "num_blocks": 2097152, 00:21:55.737 "uuid": "2947932d-dfa2-4931-84e3-0a33f871ecc4", 00:21:55.737 "numa_id": 1, 00:21:55.737 "assigned_rate_limits": { 00:21:55.737 "rw_ios_per_sec": 0, 00:21:55.737 "rw_mbytes_per_sec": 0, 00:21:55.737 "r_mbytes_per_sec": 0, 00:21:55.737 "w_mbytes_per_sec": 0 00:21:55.737 }, 00:21:55.737 "claimed": false, 00:21:55.737 "zoned": false, 00:21:55.737 "supported_io_types": { 00:21:55.737 "read": true, 00:21:55.737 "write": true, 00:21:55.737 "unmap": false, 00:21:55.737 "flush": true, 00:21:55.737 "reset": true, 00:21:55.737 "nvme_admin": true, 00:21:55.737 "nvme_io": true, 00:21:55.737 "nvme_io_md": false, 00:21:55.737 "write_zeroes": true, 00:21:55.737 "zcopy": false, 00:21:55.737 "get_zone_info": false, 00:21:55.737 "zone_management": false, 00:21:55.737 "zone_append": false, 00:21:55.737 "compare": true, 00:21:55.737 "compare_and_write": true, 00:21:55.737 "abort": true, 00:21:55.737 "seek_hole": false, 00:21:55.737 "seek_data": false, 00:21:55.737 "copy": true, 00:21:55.737 "nvme_iov_md": false 00:21:55.737 }, 00:21:55.737 
"memory_domains": [ 00:21:55.737 { 00:21:55.737 "dma_device_id": "system", 00:21:55.737 "dma_device_type": 1 00:21:55.737 } 00:21:55.737 ], 00:21:55.737 "driver_specific": { 00:21:55.737 "nvme": [ 00:21:55.737 { 00:21:55.737 "trid": { 00:21:55.737 "trtype": "TCP", 00:21:55.737 "adrfam": "IPv4", 00:21:55.737 "traddr": "10.0.0.2", 00:21:55.737 "trsvcid": "4420", 00:21:55.737 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.737 }, 00:21:55.737 "ctrlr_data": { 00:21:55.737 "cntlid": 1, 00:21:55.737 "vendor_id": "0x8086", 00:21:55.737 "model_number": "SPDK bdev Controller", 00:21:55.737 "serial_number": "00000000000000000000", 00:21:55.737 "firmware_revision": "25.01", 00:21:55.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.737 "oacs": { 00:21:55.737 "security": 0, 00:21:55.737 "format": 0, 00:21:55.737 "firmware": 0, 00:21:55.737 "ns_manage": 0 00:21:55.737 }, 00:21:55.737 "multi_ctrlr": true, 00:21:55.737 "ana_reporting": false 00:21:55.737 }, 00:21:55.737 "vs": { 00:21:55.737 "nvme_version": "1.3" 00:21:55.737 }, 00:21:55.737 "ns_data": { 00:21:55.737 "id": 1, 00:21:55.737 "can_share": true 00:21:55.737 } 00:21:55.737 } 00:21:55.737 ], 00:21:55.737 "mp_policy": "active_passive" 00:21:55.737 } 00:21:55.737 } 00:21:55.737 ] 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 [2024-12-09 15:14:57.272732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:55.737 [2024-12-09 15:14:57.272785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2284550 (9): Bad file descriptor 00:21:55.737 [2024-12-09 15:14:57.405291] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.737 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.737 [ 00:21:55.737 { 00:21:55.737 "name": "nvme0n1", 00:21:55.737 "aliases": [ 00:21:55.737 "2947932d-dfa2-4931-84e3-0a33f871ecc4" 00:21:55.737 ], 00:21:55.737 "product_name": "NVMe disk", 00:21:55.737 "block_size": 512, 00:21:55.738 "num_blocks": 2097152, 00:21:55.738 "uuid": "2947932d-dfa2-4931-84e3-0a33f871ecc4", 00:21:55.738 "numa_id": 1, 00:21:55.738 "assigned_rate_limits": { 00:21:55.738 "rw_ios_per_sec": 0, 00:21:55.738 "rw_mbytes_per_sec": 0, 00:21:55.738 "r_mbytes_per_sec": 0, 00:21:55.738 "w_mbytes_per_sec": 0 00:21:55.738 }, 00:21:55.738 "claimed": false, 00:21:55.738 "zoned": false, 00:21:55.738 "supported_io_types": { 00:21:55.738 "read": true, 00:21:55.738 "write": true, 00:21:55.738 "unmap": false, 00:21:55.738 "flush": true, 00:21:55.738 "reset": true, 00:21:55.738 "nvme_admin": true, 00:21:55.738 "nvme_io": true, 00:21:55.738 "nvme_io_md": false, 00:21:55.738 "write_zeroes": true, 00:21:55.738 "zcopy": false, 00:21:55.738 "get_zone_info": false, 00:21:55.738 "zone_management": false, 00:21:55.738 "zone_append": false, 00:21:55.738 "compare": true, 00:21:55.738 "compare_and_write": true, 00:21:55.738 "abort": true, 00:21:55.738 "seek_hole": false, 00:21:55.738 "seek_data": false, 00:21:55.738 "copy": true, 00:21:55.738 "nvme_iov_md": false 00:21:55.738 }, 00:21:55.738 "memory_domains": [ 00:21:55.738 { 00:21:55.738 "dma_device_id": "system", 00:21:55.738 "dma_device_type": 1 00:21:55.738 } 00:21:55.738 ], 00:21:55.738 "driver_specific": { 00:21:55.738 "nvme": [ 00:21:55.738 { 00:21:55.738 "trid": { 00:21:55.738 "trtype": "TCP", 00:21:55.738 "adrfam": "IPv4", 00:21:55.738 "traddr": "10.0.0.2", 00:21:55.738 "trsvcid": "4420", 00:21:55.738 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.738 }, 00:21:55.738 "ctrlr_data": { 00:21:55.738 "cntlid": 2, 00:21:55.738 "vendor_id": "0x8086", 00:21:55.738 "model_number": "SPDK bdev Controller", 00:21:55.738 "serial_number": "00000000000000000000", 00:21:55.738 "firmware_revision": "25.01", 00:21:55.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.738 "oacs": { 00:21:55.738 "security": 0, 00:21:55.738 "format": 0, 00:21:55.738 "firmware": 0, 00:21:55.738 "ns_manage": 0 00:21:55.738 }, 00:21:55.738 "multi_ctrlr": true, 00:21:55.738 "ana_reporting": false 00:21:55.738 }, 00:21:55.738 "vs": { 00:21:55.738 "nvme_version": "1.3" 00:21:55.738 }, 00:21:55.738 "ns_data": { 00:21:55.738 "id": 1, 00:21:55.738 "can_share": true 00:21:55.738 } 00:21:55.738 } 00:21:55.738 ], 00:21:55.738 "mp_policy": "active_passive" 00:21:55.738 } 00:21:55.738 } 00:21:55.738 ] 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
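The reset, re-verify, and detach steps map onto three more RPCs; comparing cntlid in the bdev_get_bdevs dumps before and after is how the test confirms that the reset tore down the first association (cntlid 1) and brought up a new one (cntlid 2). A minimal sketch, with the grep shown only as one way to pull that field out of the JSON:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_nvme_reset_controller nvme0
  $RPC bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'      # expect the value to advance, 1 -> 2 in this run
  $RPC bdev_nvme_detach_controller nvme0

The "Bad file descriptor" notice in the trace is the old admin qpair being flushed during the disconnect; the subsequent "Resetting controller successful" line confirms the bdev survived the reconnect with the same name and namespace.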
00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.645lW2z74d 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.645lW2z74d 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.645lW2z74d 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 [2024-12-09 15:14:57.481355] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.738 [2024-12-09 15:14:57.481445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.738 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.738 [2024-12-09 15:14:57.501424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.998 nvme0n1 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.998 [ 00:21:55.998 { 00:21:55.998 "name": "nvme0n1", 00:21:55.998 "aliases": [ 00:21:55.998 "2947932d-dfa2-4931-84e3-0a33f871ecc4" 00:21:55.998 ], 00:21:55.998 "product_name": "NVMe disk", 00:21:55.998 "block_size": 512, 00:21:55.998 "num_blocks": 2097152, 00:21:55.998 "uuid": "2947932d-dfa2-4931-84e3-0a33f871ecc4", 00:21:55.998 "numa_id": 1, 00:21:55.998 "assigned_rate_limits": { 00:21:55.998 "rw_ios_per_sec": 0, 00:21:55.998 "rw_mbytes_per_sec": 0, 00:21:55.998 "r_mbytes_per_sec": 0, 00:21:55.998 "w_mbytes_per_sec": 0 00:21:55.998 }, 00:21:55.998 "claimed": false, 00:21:55.998 "zoned": false, 00:21:55.998 "supported_io_types": { 00:21:55.998 "read": true, 00:21:55.998 "write": true, 00:21:55.998 "unmap": false, 00:21:55.998 "flush": true, 00:21:55.998 "reset": true, 00:21:55.998 "nvme_admin": true, 00:21:55.998 "nvme_io": true, 00:21:55.998 "nvme_io_md": false, 00:21:55.998 "write_zeroes": true, 00:21:55.998 "zcopy": false, 00:21:55.998 "get_zone_info": false, 00:21:55.998 "zone_management": false, 00:21:55.998 "zone_append": false, 00:21:55.998 "compare": true, 00:21:55.998 "compare_and_write": true, 00:21:55.998 "abort": true, 00:21:55.998 "seek_hole": false, 00:21:55.998 "seek_data": false, 00:21:55.998 "copy": true, 00:21:55.998 "nvme_iov_md": false 00:21:55.998 }, 00:21:55.998 "memory_domains": [ 00:21:55.998 { 00:21:55.998 "dma_device_id": "system", 00:21:55.998 "dma_device_type": 1 00:21:55.998 } 00:21:55.998 ], 00:21:55.998 "driver_specific": { 00:21:55.998 "nvme": [ 00:21:55.998 { 00:21:55.998 "trid": { 00:21:55.998 "trtype": "TCP", 00:21:55.998 "adrfam": "IPv4", 00:21:55.998 "traddr": "10.0.0.2", 00:21:55.998 "trsvcid": "4421", 00:21:55.998 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.998 }, 00:21:55.998 "ctrlr_data": { 00:21:55.998 "cntlid": 3, 00:21:55.998 "vendor_id": "0x8086", 00:21:55.998 "model_number": "SPDK bdev Controller", 00:21:55.998 "serial_number": "00000000000000000000", 00:21:55.998 "firmware_revision": "25.01", 00:21:55.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.998 "oacs": { 00:21:55.998 "security": 0, 00:21:55.998 "format": 0, 00:21:55.998 "firmware": 0, 00:21:55.998 "ns_manage": 0 00:21:55.998 }, 00:21:55.998 "multi_ctrlr": true, 00:21:55.998 "ana_reporting": false 00:21:55.998 }, 00:21:55.998 "vs": { 00:21:55.998 "nvme_version": "1.3" 00:21:55.998 }, 00:21:55.998 "ns_data": { 00:21:55.998 "id": 1, 00:21:55.998 "can_share": true 00:21:55.998 } 00:21:55.998 } 00:21:55.998 ], 00:21:55.998 "mp_policy": "active_passive" 00:21:55.998 } 00:21:55.998 } 00:21:55.998 ] 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.645lW2z74d 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
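The block above is the TLS leg of async_init: a PSK in interchange format is written to a mktemp file with 0600 permissions and registered as key0, allow-any-host is disabled so the explicit host entry with --psk governs access, a --secure-channel listener is opened on 4421, and bdev_nvme_attach_controller reconnects with the same key; the second bdev_get_bdevs dump (trsvcid 4421, cntlid 3) confirms the secure path before the key file is removed. A condensed sketch of the same sequence, assuming scripts/rpc.py and a fixed key path in place of the random mktemp name (the RPC names, flags and key string are copied from the trace):

  # 1. Register the TLS PSK shown in the log (owner-only permissions).
  KEY=/tmp/psk.key    # stand-in for the mktemp path (/tmp/tmp.645lW2z74d in the trace)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"

  # 2. Require an explicit host bound to that key and expose a TLS listener on 4421.
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0

  # 3. Re-attach the initiator-side bdev over the secure channel (cntlid 3 / port 4421 above).
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0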
00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:55.998 rmmod nvme_tcp 00:21:55.998 rmmod nvme_fabrics 00:21:55.998 rmmod nvme_keyring 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1505437 ']' 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1505437 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1505437 ']' 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1505437 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1505437 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1505437' 00:21:55.998 killing process with pid 1505437 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1505437 00:21:55.998 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1505437 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.258 15:14:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.163 15:14:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:58.422 00:21:58.422 real 0m9.417s 00:21:58.422 user 0m3.070s 00:21:58.422 sys 0m4.759s 00:21:58.422 15:14:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.422 15:14:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.422 ************************************ 00:21:58.422 END TEST nvmf_async_init 00:21:58.422 ************************************ 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.422 ************************************ 00:21:58.422 START TEST dma 00:21:58.422 ************************************ 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:58.422 * Looking for test storage... 00:21:58.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.422 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:58.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.681 --rc genhtml_branch_coverage=1 00:21:58.681 --rc genhtml_function_coverage=1 00:21:58.681 --rc genhtml_legend=1 00:21:58.681 --rc geninfo_all_blocks=1 00:21:58.681 --rc geninfo_unexecuted_blocks=1 00:21:58.681 00:21:58.681 ' 00:21:58.681 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:58.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.682 --rc genhtml_branch_coverage=1 00:21:58.682 --rc genhtml_function_coverage=1 00:21:58.682 --rc genhtml_legend=1 00:21:58.682 --rc geninfo_all_blocks=1 00:21:58.682 --rc geninfo_unexecuted_blocks=1 00:21:58.682 00:21:58.682 ' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:58.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.682 --rc genhtml_branch_coverage=1 00:21:58.682 --rc genhtml_function_coverage=1 00:21:58.682 --rc genhtml_legend=1 00:21:58.682 --rc geninfo_all_blocks=1 00:21:58.682 --rc geninfo_unexecuted_blocks=1 00:21:58.682 00:21:58.682 ' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:58.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.682 --rc genhtml_branch_coverage=1 00:21:58.682 --rc genhtml_function_coverage=1 00:21:58.682 --rc genhtml_legend=1 00:21:58.682 --rc geninfo_all_blocks=1 00:21:58.682 --rc geninfo_unexecuted_blocks=1 00:21:58.682 00:21:58.682 ' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.682 
15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:58.682 00:21:58.682 real 0m0.210s 00:21:58.682 user 0m0.135s 00:21:58.682 sys 0m0.088s 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:58.682 ************************************ 00:21:58.682 END TEST dma 00:21:58.682 ************************************ 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.682 ************************************ 00:21:58.682 START TEST nvmf_identify 00:21:58.682 
************************************ 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:58.682 * Looking for test storage... 00:21:58.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:58.682 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:58.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.942 --rc genhtml_branch_coverage=1 00:21:58.942 --rc genhtml_function_coverage=1 00:21:58.942 --rc genhtml_legend=1 00:21:58.942 --rc geninfo_all_blocks=1 00:21:58.942 --rc geninfo_unexecuted_blocks=1 00:21:58.942 00:21:58.942 ' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:58.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.942 --rc genhtml_branch_coverage=1 00:21:58.942 --rc genhtml_function_coverage=1 00:21:58.942 --rc genhtml_legend=1 00:21:58.942 --rc geninfo_all_blocks=1 00:21:58.942 --rc geninfo_unexecuted_blocks=1 00:21:58.942 00:21:58.942 ' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:58.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.942 --rc genhtml_branch_coverage=1 00:21:58.942 --rc genhtml_function_coverage=1 00:21:58.942 --rc genhtml_legend=1 00:21:58.942 --rc geninfo_all_blocks=1 00:21:58.942 --rc geninfo_unexecuted_blocks=1 00:21:58.942 00:21:58.942 ' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:58.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.942 --rc genhtml_branch_coverage=1 00:21:58.942 --rc genhtml_function_coverage=1 00:21:58.942 --rc genhtml_legend=1 00:21:58.942 --rc geninfo_all_blocks=1 00:21:58.942 --rc geninfo_unexecuted_blocks=1 00:21:58.942 00:21:58.942 ' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.942 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:58.943 15:15:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:05.520 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:05.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:05.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:05.521 Found net devices under 0000:af:00.0: cvl_0_0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:05.521 Found net devices under 0000:af:00.1: cvl_0_1 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:22:05.521 00:22:05.521 --- 10.0.0.2 ping statistics --- 00:22:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.521 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:22:05.521 00:22:05.521 --- 10.0.0.1 ping statistics --- 00:22:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.521 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1509511 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1509511 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1509511 ']' 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 [2024-12-09 15:15:06.549327] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:22:05.521 [2024-12-09 15:15:06.549370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.521 [2024-12-09 15:15:06.625852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.521 [2024-12-09 15:15:06.667550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.521 [2024-12-09 15:15:06.667589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.521 [2024-12-09 15:15:06.667597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.521 [2024-12-09 15:15:06.667603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.521 [2024-12-09 15:15:06.667608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.521 [2024-12-09 15:15:06.669052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.521 [2024-12-09 15:15:06.669159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.521 [2024-12-09 15:15:06.669272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.521 [2024-12-09 15:15:06.669273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 [2024-12-09 15:15:06.769691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 Malloc0 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 [2024-12-09 15:15:06.868365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.521 [ 00:22:05.521 { 00:22:05.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:05.521 "subtype": "Discovery", 00:22:05.521 "listen_addresses": [ 00:22:05.521 { 00:22:05.521 "trtype": "TCP", 00:22:05.521 "adrfam": "IPv4", 00:22:05.521 "traddr": "10.0.0.2", 00:22:05.521 "trsvcid": "4420" 00:22:05.521 } 00:22:05.521 ], 00:22:05.521 "allow_any_host": true, 00:22:05.521 "hosts": [] 00:22:05.521 }, 00:22:05.521 { 00:22:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.521 "subtype": "NVMe", 00:22:05.521 "listen_addresses": [ 00:22:05.521 { 00:22:05.521 "trtype": "TCP", 00:22:05.521 "adrfam": "IPv4", 00:22:05.521 "traddr": "10.0.0.2", 00:22:05.521 "trsvcid": "4420" 00:22:05.521 } 00:22:05.521 ], 00:22:05.521 "allow_any_host": true, 00:22:05.521 "hosts": [], 00:22:05.521 "serial_number": "SPDK00000000000001", 00:22:05.521 "model_number": "SPDK bdev Controller", 00:22:05.521 "max_namespaces": 32, 00:22:05.521 "min_cntlid": 1, 00:22:05.521 "max_cntlid": 65519, 00:22:05.521 "namespaces": [ 00:22:05.521 { 00:22:05.521 "nsid": 1, 00:22:05.521 "bdev_name": "Malloc0", 00:22:05.521 "name": "Malloc0", 00:22:05.521 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:05.521 "eui64": "ABCDEF0123456789", 00:22:05.521 "uuid": "d5e8a9ac-5851-4a40-b592-bb77be5699b9" 00:22:05.521 } 00:22:05.521 ] 00:22:05.521 } 00:22:05.521 ] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.521 15:15:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:05.521 [2024-12-09 15:15:06.920773] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:22:05.521 [2024-12-09 15:15:06.920813] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509706 ] 00:22:05.521 [2024-12-09 15:15:06.960987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:05.521 [2024-12-09 15:15:06.961031] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:05.521 [2024-12-09 15:15:06.961036] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:05.521 [2024-12-09 15:15:06.961049] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:05.521 [2024-12-09 15:15:06.961056] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:05.521 [2024-12-09 15:15:06.964421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:05.521 [2024-12-09 15:15:06.964452] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1027690 0 00:22:05.521 [2024-12-09 15:15:06.964544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:05.521 [2024-12-09 15:15:06.964552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:05.521 [2024-12-09 15:15:06.964559] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:05.521 [2024-12-09 15:15:06.964562] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:05.521 [2024-12-09 15:15:06.964591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.521 [2024-12-09 15:15:06.964599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.521 [2024-12-09 15:15:06.964603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.521 [2024-12-09 15:15:06.964613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:05.521 [2024-12-09 15:15:06.964625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.521 [2024-12-09 15:15:06.971227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.971236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.971239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.971255] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:05.522 [2024-12-09 15:15:06.971261] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:05.522 [2024-12-09 15:15:06.971266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:05.522 [2024-12-09 15:15:06.971279] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.971293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.971305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.971378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.971384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.971387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.971398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:05.522 [2024-12-09 15:15:06.971404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:05.522 [2024-12-09 15:15:06.971410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.971423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.971432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.971488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.971495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.971498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.971506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:05.522 [2024-12-09 15:15:06.971513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:05.522 [2024-12-09 15:15:06.971518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.971533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.971543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 
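
The rpc_cmd calls captured above come from the host/identify.sh harness: create the TCP transport, back it with a Malloc0 ramdisk bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and add data and discovery listeners on 10.0.0.2:4420. A minimal standalone sketch of the same bring-up with scripts/rpc.py, assuming an nvmf_tgt process is already running and reachable on the default RPC socket (the harness handles that part and it is not shown here):

    # transport + backing bdev (same flags as the rpc_cmd calls in this run)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem, namespace and listeners (same NQN, NGUID/EUI64 and 10.0.0.2:4420 address as this run)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems   # should report the two subsystems shown in the JSON dump above
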
00:22:05.522 [2024-12-09 15:15:06.971602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.971608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.971611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.971618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:05.522 [2024-12-09 15:15:06.971626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.971639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.971648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.971701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.971707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.971711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.971718] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:05.522 [2024-12-09 15:15:06.971722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:05.522 [2024-12-09 15:15:06.971728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:05.522 [2024-12-09 15:15:06.971836] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:05.522 [2024-12-09 15:15:06.971841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:05.522 [2024-12-09 15:15:06.971848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.971860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.971870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.971931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.971936] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.971939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.971947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:05.522 [2024-12-09 15:15:06.971958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.971966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.971971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.971980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.972042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.972047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.972051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.972058] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:05.522 [2024-12-09 15:15:06.972062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:05.522 [2024-12-09 15:15:06.972069] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:05.522 [2024-12-09 15:15:06.972075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:05.522 [2024-12-09 15:15:06.972083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.972103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.972182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.522 [2024-12-09 15:15:06.972188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.522 [2024-12-09 15:15:06.972191] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972195] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=0 00:22:05.522 [2024-12-09 15:15:06.972199] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1089100) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:22:05.522 [2024-12-09 15:15:06.972203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972213] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.972238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.972241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.972253] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:05.522 [2024-12-09 15:15:06.972257] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:05.522 [2024-12-09 15:15:06.972261] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:05.522 [2024-12-09 15:15:06.972266] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:05.522 [2024-12-09 15:15:06.972272] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:05.522 [2024-12-09 15:15:06.972276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:05.522 [2024-12-09 15:15:06.972284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:05.522 [2024-12-09 15:15:06.972290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.522 [2024-12-09 15:15:06.972312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.972372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.972378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.972381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.972391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1027690) 00:22:05.522 
[2024-12-09 15:15:06.972403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.522 [2024-12-09 15:15:06.972409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.522 [2024-12-09 15:15:06.972425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.522 [2024-12-09 15:15:06.972441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.522 [2024-12-09 15:15:06.972457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:05.522 [2024-12-09 15:15:06.972467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:05.522 [2024-12-09 15:15:06.972473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.972495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089100, cid 0, qid 0 00:22:05.522 [2024-12-09 15:15:06.972499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089280, cid 1, qid 0 00:22:05.522 [2024-12-09 15:15:06.972503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089400, cid 2, qid 0 00:22:05.522 [2024-12-09 15:15:06.972508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.522 [2024-12-09 15:15:06.972512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:22:05.522 [2024-12-09 15:15:06.972598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:06.972605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:06.972608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:05.522 [2024-12-09 15:15:06.972611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:06.972615] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:05.522 [2024-12-09 15:15:06.972620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:05.522 [2024-12-09 15:15:06.972629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:06.972638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:06.972647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:22:05.522 [2024-12-09 15:15:06.972713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.522 [2024-12-09 15:15:06.972719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.522 [2024-12-09 15:15:06.972722] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972725] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=4096, cccid=4 00:22:05.522 [2024-12-09 15:15:06.972729] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=4096 00:22:05.522 [2024-12-09 15:15:06.972733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972743] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:06.972747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:07.016237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:07.016240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:07.016255] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:05.522 [2024-12-09 15:15:07.016273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:07.016284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:07.016290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:07.016303] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.522 [2024-12-09 15:15:07.016319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:22:05.522 [2024-12-09 15:15:07.016324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089880, cid 5, qid 0 00:22:05.522 [2024-12-09 15:15:07.016417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.522 [2024-12-09 15:15:07.016422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.522 [2024-12-09 15:15:07.016425] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016428] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=1024, cccid=4 00:22:05.522 [2024-12-09 15:15:07.016432] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=1024 00:22:05.522 [2024-12-09 15:15:07.016436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016442] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:07.016454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:07.016457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.016460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089880) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:07.058273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.522 [2024-12-09 15:15:07.058283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.522 [2024-12-09 15:15:07.058287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.058290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:22:05.522 [2024-12-09 15:15:07.058299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.058303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:22:05.522 [2024-12-09 15:15:07.058309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.522 [2024-12-09 15:15:07.058323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:22:05.522 [2024-12-09 15:15:07.058393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.522 [2024-12-09 15:15:07.058400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.522 [2024-12-09 15:15:07.058404] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.058407] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=3072, cccid=4 00:22:05.522 [2024-12-09 15:15:07.058411] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=3072 00:22:05.522 [2024-12-09 15:15:07.058415] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.058421] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.058425] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.522 [2024-12-09 15:15:07.058437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.058443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.058446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.058449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.058456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.058460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 15:15:07.058469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.058482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089700, cid 4, qid 0 00:22:05.523 [2024-12-09 15:15:07.058548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.523 [2024-12-09 15:15:07.058554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.523 [2024-12-09 15:15:07.058559] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.058563] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1027690): datao=0, datal=8, cccid=4 00:22:05.523 [2024-12-09 15:15:07.058567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1089700) on tqpair(0x1027690): expected_datao=0, payload_size=8 00:22:05.523 [2024-12-09 15:15:07.058571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.058576] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.058579] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.104239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.104242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089700) on tqpair=0x1027690 00:22:05.523 ===================================================== 00:22:05.523 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:05.523 ===================================================== 00:22:05.523 Controller Capabilities/Features 00:22:05.523 ================================ 00:22:05.523 Vendor ID: 0000 00:22:05.523 Subsystem Vendor ID: 0000 00:22:05.523 Serial Number: .................... 00:22:05.523 Model Number: ........................................ 
00:22:05.523 Firmware Version: 25.01 00:22:05.523 Recommended Arb Burst: 0 00:22:05.523 IEEE OUI Identifier: 00 00 00 00:22:05.523 Multi-path I/O 00:22:05.523 May have multiple subsystem ports: No 00:22:05.523 May have multiple controllers: No 00:22:05.523 Associated with SR-IOV VF: No 00:22:05.523 Max Data Transfer Size: 131072 00:22:05.523 Max Number of Namespaces: 0 00:22:05.523 Max Number of I/O Queues: 1024 00:22:05.523 NVMe Specification Version (VS): 1.3 00:22:05.523 NVMe Specification Version (Identify): 1.3 00:22:05.523 Maximum Queue Entries: 128 00:22:05.523 Contiguous Queues Required: Yes 00:22:05.523 Arbitration Mechanisms Supported 00:22:05.523 Weighted Round Robin: Not Supported 00:22:05.523 Vendor Specific: Not Supported 00:22:05.523 Reset Timeout: 15000 ms 00:22:05.523 Doorbell Stride: 4 bytes 00:22:05.523 NVM Subsystem Reset: Not Supported 00:22:05.523 Command Sets Supported 00:22:05.523 NVM Command Set: Supported 00:22:05.523 Boot Partition: Not Supported 00:22:05.523 Memory Page Size Minimum: 4096 bytes 00:22:05.523 Memory Page Size Maximum: 4096 bytes 00:22:05.523 Persistent Memory Region: Not Supported 00:22:05.523 Optional Asynchronous Events Supported 00:22:05.523 Namespace Attribute Notices: Not Supported 00:22:05.523 Firmware Activation Notices: Not Supported 00:22:05.523 ANA Change Notices: Not Supported 00:22:05.523 PLE Aggregate Log Change Notices: Not Supported 00:22:05.523 LBA Status Info Alert Notices: Not Supported 00:22:05.523 EGE Aggregate Log Change Notices: Not Supported 00:22:05.523 Normal NVM Subsystem Shutdown event: Not Supported 00:22:05.523 Zone Descriptor Change Notices: Not Supported 00:22:05.523 Discovery Log Change Notices: Supported 00:22:05.523 Controller Attributes 00:22:05.523 128-bit Host Identifier: Not Supported 00:22:05.523 Non-Operational Permissive Mode: Not Supported 00:22:05.523 NVM Sets: Not Supported 00:22:05.523 Read Recovery Levels: Not Supported 00:22:05.523 Endurance Groups: Not Supported 00:22:05.523 Predictable Latency Mode: Not Supported 00:22:05.523 Traffic Based Keep ALive: Not Supported 00:22:05.523 Namespace Granularity: Not Supported 00:22:05.523 SQ Associations: Not Supported 00:22:05.523 UUID List: Not Supported 00:22:05.523 Multi-Domain Subsystem: Not Supported 00:22:05.523 Fixed Capacity Management: Not Supported 00:22:05.523 Variable Capacity Management: Not Supported 00:22:05.523 Delete Endurance Group: Not Supported 00:22:05.523 Delete NVM Set: Not Supported 00:22:05.523 Extended LBA Formats Supported: Not Supported 00:22:05.523 Flexible Data Placement Supported: Not Supported 00:22:05.523 00:22:05.523 Controller Memory Buffer Support 00:22:05.523 ================================ 00:22:05.523 Supported: No 00:22:05.523 00:22:05.523 Persistent Memory Region Support 00:22:05.523 ================================ 00:22:05.523 Supported: No 00:22:05.523 00:22:05.523 Admin Command Set Attributes 00:22:05.523 ============================ 00:22:05.523 Security Send/Receive: Not Supported 00:22:05.523 Format NVM: Not Supported 00:22:05.523 Firmware Activate/Download: Not Supported 00:22:05.523 Namespace Management: Not Supported 00:22:05.523 Device Self-Test: Not Supported 00:22:05.523 Directives: Not Supported 00:22:05.523 NVMe-MI: Not Supported 00:22:05.523 Virtualization Management: Not Supported 00:22:05.523 Doorbell Buffer Config: Not Supported 00:22:05.523 Get LBA Status Capability: Not Supported 00:22:05.523 Command & Feature Lockdown Capability: Not Supported 00:22:05.523 Abort Command Limit: 1 00:22:05.523 Async 
Event Request Limit: 4 00:22:05.523 Number of Firmware Slots: N/A 00:22:05.523 Firmware Slot 1 Read-Only: N/A 00:22:05.523 Firmware Activation Without Reset: N/A 00:22:05.523 Multiple Update Detection Support: N/A 00:22:05.523 Firmware Update Granularity: No Information Provided 00:22:05.523 Per-Namespace SMART Log: No 00:22:05.523 Asymmetric Namespace Access Log Page: Not Supported 00:22:05.523 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:05.523 Command Effects Log Page: Not Supported 00:22:05.523 Get Log Page Extended Data: Supported 00:22:05.523 Telemetry Log Pages: Not Supported 00:22:05.523 Persistent Event Log Pages: Not Supported 00:22:05.523 Supported Log Pages Log Page: May Support 00:22:05.523 Commands Supported & Effects Log Page: Not Supported 00:22:05.523 Feature Identifiers & Effects Log Page:May Support 00:22:05.523 NVMe-MI Commands & Effects Log Page: May Support 00:22:05.523 Data Area 4 for Telemetry Log: Not Supported 00:22:05.523 Error Log Page Entries Supported: 128 00:22:05.523 Keep Alive: Not Supported 00:22:05.523 00:22:05.523 NVM Command Set Attributes 00:22:05.523 ========================== 00:22:05.523 Submission Queue Entry Size 00:22:05.523 Max: 1 00:22:05.523 Min: 1 00:22:05.523 Completion Queue Entry Size 00:22:05.523 Max: 1 00:22:05.523 Min: 1 00:22:05.523 Number of Namespaces: 0 00:22:05.523 Compare Command: Not Supported 00:22:05.523 Write Uncorrectable Command: Not Supported 00:22:05.523 Dataset Management Command: Not Supported 00:22:05.523 Write Zeroes Command: Not Supported 00:22:05.523 Set Features Save Field: Not Supported 00:22:05.523 Reservations: Not Supported 00:22:05.523 Timestamp: Not Supported 00:22:05.523 Copy: Not Supported 00:22:05.523 Volatile Write Cache: Not Present 00:22:05.523 Atomic Write Unit (Normal): 1 00:22:05.523 Atomic Write Unit (PFail): 1 00:22:05.523 Atomic Compare & Write Unit: 1 00:22:05.523 Fused Compare & Write: Supported 00:22:05.523 Scatter-Gather List 00:22:05.523 SGL Command Set: Supported 00:22:05.523 SGL Keyed: Supported 00:22:05.523 SGL Bit Bucket Descriptor: Not Supported 00:22:05.523 SGL Metadata Pointer: Not Supported 00:22:05.523 Oversized SGL: Not Supported 00:22:05.523 SGL Metadata Address: Not Supported 00:22:05.523 SGL Offset: Supported 00:22:05.523 Transport SGL Data Block: Not Supported 00:22:05.523 Replay Protected Memory Block: Not Supported 00:22:05.523 00:22:05.523 Firmware Slot Information 00:22:05.523 ========================= 00:22:05.523 Active slot: 0 00:22:05.523 00:22:05.523 00:22:05.523 Error Log 00:22:05.523 ========= 00:22:05.523 00:22:05.523 Active Namespaces 00:22:05.523 ================= 00:22:05.523 Discovery Log Page 00:22:05.523 ================== 00:22:05.523 Generation Counter: 2 00:22:05.523 Number of Records: 2 00:22:05.523 Record Format: 0 00:22:05.523 00:22:05.523 Discovery Log Entry 0 00:22:05.523 ---------------------- 00:22:05.523 Transport Type: 3 (TCP) 00:22:05.523 Address Family: 1 (IPv4) 00:22:05.523 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:05.523 Entry Flags: 00:22:05.523 Duplicate Returned Information: 1 00:22:05.523 Explicit Persistent Connection Support for Discovery: 1 00:22:05.523 Transport Requirements: 00:22:05.523 Secure Channel: Not Required 00:22:05.523 Port ID: 0 (0x0000) 00:22:05.523 Controller ID: 65535 (0xffff) 00:22:05.523 Admin Max SQ Size: 128 00:22:05.523 Transport Service Identifier: 4420 00:22:05.523 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:05.523 Transport Address: 10.0.0.2 00:22:05.523 
Discovery Log Entry 1 00:22:05.523 ---------------------- 00:22:05.523 Transport Type: 3 (TCP) 00:22:05.523 Address Family: 1 (IPv4) 00:22:05.523 Subsystem Type: 2 (NVM Subsystem) 00:22:05.523 Entry Flags: 00:22:05.523 Duplicate Returned Information: 0 00:22:05.523 Explicit Persistent Connection Support for Discovery: 0 00:22:05.523 Transport Requirements: 00:22:05.523 Secure Channel: Not Required 00:22:05.523 Port ID: 0 (0x0000) 00:22:05.523 Controller ID: 65535 (0xffff) 00:22:05.523 Admin Max SQ Size: 128 00:22:05.523 Transport Service Identifier: 4420 00:22:05.523 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:05.523 Transport Address: 10.0.0.2 [2024-12-09 15:15:07.104323] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:05.523 [2024-12-09 15:15:07.104332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089100) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.523 [2024-12-09 15:15:07.104343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089280) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.523 [2024-12-09 15:15:07.104352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089400) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.523 [2024-12-09 15:15:07.104360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.523 [2024-12-09 15:15:07.104373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 15:15:07.104387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.104400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.523 [2024-12-09 15:15:07.104457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.104463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.104466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 
15:15:07.104489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.104503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.523 [2024-12-09 15:15:07.104572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.104578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.104581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104589] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:05.523 [2024-12-09 15:15:07.104593] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:05.523 [2024-12-09 15:15:07.104601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 15:15:07.104615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.104624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.523 [2024-12-09 15:15:07.104686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.104692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.104696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 15:15:07.104721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.104730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.523 [2024-12-09 15:15:07.104788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.104793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.104797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104816] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 15:15:07.104822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.104831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.523 [2024-12-09 15:15:07.104897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.523 [2024-12-09 15:15:07.104903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.523 [2024-12-09 15:15:07.104907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.523 [2024-12-09 15:15:07.104920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.523 [2024-12-09 15:15:07.104927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.523 [2024-12-09 15:15:07.104933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.523 [2024-12-09 15:15:07.104942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 
[2024-12-09 15:15:07.105512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
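
The spdk_nvme_identify run above connects to the discovery service at 10.0.0.2:4420 and prints a discovery log with two entries: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1 created earlier. As a cross-check that is not part of this test run, the same log can be fetched from a Linux initiator, assuming nvme-cli and the nvme-tcp kernel module are available on that host:

    # hypothetical initiator-side check; nothing below appears in this log
    modprobe nvme_tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420      # expect the same two discovery log entries
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                     # the Malloc0-backed namespace shows up as a block device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The same spdk_nvme_identify invocation, with subnqn:nqn.2016-06.io.spdk:cnode1 in the -r string, targets the NVM subsystem directly rather than the discovery controller.
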
00:22:05.524 [2024-12-09 15:15:07.105832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.105917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.105923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.105925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.105937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.105944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.105950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.105959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106469] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 
[2024-12-09 15:15:07.106787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.106952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.106958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.106961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.106973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.106980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.106986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.106997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.107054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.107061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.107064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.107068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.107076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.107080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.107083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.524 [2024-12-09 15:15:07.107089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.524 [2024-12-09 15:15:07.107098] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.524 [2024-12-09 15:15:07.107157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.524 [2024-12-09 15:15:07.107163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.524 [2024-12-09 15:15:07.107166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.107169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.524 [2024-12-09 15:15:07.107178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.107182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.524 [2024-12-09 15:15:07.107185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 
[2024-12-09 15:15:07.107473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:05.525 [2024-12-09 15:15:07.107787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.107911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.107921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.107977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.107984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.107987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.107990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.107998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.108011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.108021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.108074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.108080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.108083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.108094] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.108108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.108118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.108174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.108180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.108183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.108195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.108202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1027690) 00:22:05.525 [2024-12-09 15:15:07.108208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.112223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1089580, cid 3, qid 0 00:22:05.525 [2024-12-09 15:15:07.112233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.112238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.112243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.112247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1089580) on tqpair=0x1027690 00:22:05.525 [2024-12-09 15:15:07.112254] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:05.525 00:22:05.525 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:05.525 [2024-12-09 15:15:07.148544] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
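The spdk_nvme_identify invocation above is what produces the controller bring-up trace that follows: the host library opens the admin queue pair over TCP (icreq/icresp), sends FABRIC CONNECT, reads the vs and cap properties, enables the controller (CC.EN = 1, then waits for CSTS.RDY = 1), and walks through IDENTIFY, AER configuration, keep-alive and queue-count setup. As a rough illustration only, not part of the harness or of this log, the same connect-and-identify flow can be driven from the public SPDK host API roughly as sketched below; the transport values are copied from the -r argument above, while the program structure and the app name "identify_sketch" are assumptions.

    /*
     * Illustrative sketch (assumed, not the harness's code): connect to the
     * TCP subsystem targeted by spdk_nvme_identify above and read the
     * controller data, using only public SPDK host API calls.
     */
    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
    	struct spdk_env_opts env_opts;
    	struct spdk_nvme_transport_id trid = {0};
    	struct spdk_nvme_ctrlr *ctrlr;
    	const struct spdk_nvme_ctrlr_data *cdata;

    	spdk_env_opts_init(&env_opts);
    	env_opts.name = "identify_sketch";	/* hypothetical app name */
    	if (spdk_env_init(&env_opts) < 0) {
    		return 1;
    	}

    	/* Same "trtype:... traddr:... subnqn:..." format passed via -r above. */
    	if (spdk_nvme_transport_id_parse(&trid,
    	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
    	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
    		return 1;
    	}

    	/*
    	 * spdk_nvme_connect() performs the admin-queue bring-up recorded in
    	 * the DEBUG output: icreq/icresp exchange, FABRIC CONNECT, vs/cap
    	 * property reads, CC.EN = 1, waiting for CSTS.RDY = 1, IDENTIFY,
    	 * and AER/keep-alive/queue-count setup.
    	 */
    	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    	if (ctrlr == NULL) {
    		return 1;
    	}

    	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    	printf("connected to %s, CNTLID 0x%04x\n", trid.subnqn, cdata->cntlid);

    	spdk_nvme_detach(ctrlr);
    	return 0;
    }

Such a snippet would be built against the SPDK headers and libraries compiled earlier in this job; the DEBUG records that follow are the library-side view of exactly these steps as executed by the identify tool.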
00:22:05.525 [2024-12-09 15:15:07.148587] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509860 ] 00:22:05.525 [2024-12-09 15:15:07.189425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:05.525 [2024-12-09 15:15:07.189463] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:05.525 [2024-12-09 15:15:07.189467] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:05.525 [2024-12-09 15:15:07.189482] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:05.525 [2024-12-09 15:15:07.189489] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:05.525 [2024-12-09 15:15:07.189836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:05.525 [2024-12-09 15:15:07.189864] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a59690 0 00:22:05.525 [2024-12-09 15:15:07.200228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:05.525 [2024-12-09 15:15:07.200239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:05.525 [2024-12-09 15:15:07.200248] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:05.525 [2024-12-09 15:15:07.200252] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:05.525 [2024-12-09 15:15:07.200279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.200284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.200289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.525 [2024-12-09 15:15:07.200298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:05.525 [2024-12-09 15:15:07.200314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.525 [2024-12-09 15:15:07.208227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.208236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.208239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.525 [2024-12-09 15:15:07.208251] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:05.525 [2024-12-09 15:15:07.208257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:05.525 [2024-12-09 15:15:07.208262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:05.525 [2024-12-09 15:15:07.208274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208283] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.525 [2024-12-09 15:15:07.208290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.208302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.525 [2024-12-09 15:15:07.208453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.208459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.208462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.525 [2024-12-09 15:15:07.208472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:05.525 [2024-12-09 15:15:07.208479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:05.525 [2024-12-09 15:15:07.208485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.525 [2024-12-09 15:15:07.208497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.208507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.525 [2024-12-09 15:15:07.208576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.208582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.208585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.525 [2024-12-09 15:15:07.208592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:05.525 [2024-12-09 15:15:07.208599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:05.525 [2024-12-09 15:15:07.208604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.525 [2024-12-09 15:15:07.208616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.208625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.525 [2024-12-09 15:15:07.208686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.208692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 
15:15:07.208695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.525 [2024-12-09 15:15:07.208702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:05.525 [2024-12-09 15:15:07.208710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.525 [2024-12-09 15:15:07.208722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.208733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.525 [2024-12-09 15:15:07.208795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.208800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.208803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.525 [2024-12-09 15:15:07.208810] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:05.525 [2024-12-09 15:15:07.208814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:05.525 [2024-12-09 15:15:07.208821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:05.525 [2024-12-09 15:15:07.208928] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:05.525 [2024-12-09 15:15:07.208932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:05.525 [2024-12-09 15:15:07.208939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.208945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.525 [2024-12-09 15:15:07.208950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.525 [2024-12-09 15:15:07.208961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.525 [2024-12-09 15:15:07.209024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.525 [2024-12-09 15:15:07.209030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.525 [2024-12-09 15:15:07.209033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.209036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.525 
[2024-12-09 15:15:07.209040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:05.525 [2024-12-09 15:15:07.209048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.525 [2024-12-09 15:15:07.209052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.209069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.526 [2024-12-09 15:15:07.209134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.209140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.209143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.209150] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:05.526 [2024-12-09 15:15:07.209154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:05.526 [2024-12-09 15:15:07.209173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.209199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.526 [2024-12-09 15:15:07.209291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.526 [2024-12-09 15:15:07.209297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.526 [2024-12-09 15:15:07.209300] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209303] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=4096, cccid=0 00:22:05.526 [2024-12-09 15:15:07.209307] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb100) on tqpair(0x1a59690): expected_datao=0, payload_size=4096 00:22:05.526 [2024-12-09 15:15:07.209311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209326] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209330] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.209369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.209372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.209384] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:05.526 [2024-12-09 15:15:07.209388] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:05.526 [2024-12-09 15:15:07.209392] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:05.526 [2024-12-09 15:15:07.209395] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:05.526 [2024-12-09 15:15:07.209399] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:05.526 [2024-12-09 15:15:07.209403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.526 [2024-12-09 15:15:07.209438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.526 [2024-12-09 15:15:07.209499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.209505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.209508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.209517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.526 [2024-12-09 15:15:07.209539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 
15:15:07.209545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.526 [2024-12-09 15:15:07.209555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.526 [2024-12-09 15:15:07.209570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.526 [2024-12-09 15:15:07.209585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.209621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb100, cid 0, qid 0 00:22:05.526 [2024-12-09 15:15:07.209626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb280, cid 1, qid 0 00:22:05.526 [2024-12-09 15:15:07.209630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb400, cid 2, qid 0 00:22:05.526 [2024-12-09 15:15:07.209634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.526 [2024-12-09 15:15:07.209638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.526 [2024-12-09 15:15:07.209735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.209741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.209744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.209751] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:05.526 [2024-12-09 15:15:07.209756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.526 [2024-12-09 15:15:07.209800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.526 [2024-12-09 15:15:07.209868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.209873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.209876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.209929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.209945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.209949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.209954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.209963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.526 [2024-12-09 15:15:07.210038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.526 [2024-12-09 15:15:07.210044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.526 [2024-12-09 15:15:07.210047] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.210050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=4096, cccid=4 00:22:05.526 [2024-12-09 15:15:07.210053] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb700) on tqpair(0x1a59690): expected_datao=0, payload_size=4096 00:22:05.526 [2024-12-09 15:15:07.210057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.210068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.210071] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 
15:15:07.253226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.253237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.253240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.253253] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:05.526 [2024-12-09 15:15:07.253263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.253272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.253279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.253289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.253304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.526 [2024-12-09 15:15:07.253409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.526 [2024-12-09 15:15:07.253415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.526 [2024-12-09 15:15:07.253418] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253421] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=4096, cccid=4 00:22:05.526 [2024-12-09 15:15:07.253425] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb700) on tqpair(0x1a59690): expected_datao=0, payload_size=4096 00:22:05.526 [2024-12-09 15:15:07.253429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253435] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253438] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.253474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.253477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.253491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.253500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.253506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.253515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.253525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.526 [2024-12-09 15:15:07.253604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.526 [2024-12-09 15:15:07.253610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.526 [2024-12-09 15:15:07.253613] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253616] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=4096, cccid=4 00:22:05.526 [2024-12-09 15:15:07.253619] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb700) on tqpair(0x1a59690): expected_datao=0, payload_size=4096 00:22:05.526 [2024-12-09 15:15:07.253623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253634] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.253638] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.294364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.294367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.294378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294418] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:05.526 [2024-12-09 15:15:07.294423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:05.526 [2024-12-09 15:15:07.294427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:05.526 [2024-12-09 15:15:07.294440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 
[2024-12-09 15:15:07.294444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.294450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.294456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.294468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.526 [2024-12-09 15:15:07.294482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.526 [2024-12-09 15:15:07.294486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb880, cid 5, qid 0 00:22:05.526 [2024-12-09 15:15:07.294558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.294564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.294567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.294576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.294581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.294584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb880) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.294595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.294605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.294614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb880, cid 5, qid 0 00:22:05.526 [2024-12-09 15:15:07.294679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.294685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.294688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb880) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.294699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.294710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.294720] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb880, cid 5, qid 0 00:22:05.526 [2024-12-09 15:15:07.294782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.294788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.294791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb880) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.294802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.294812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.526 [2024-12-09 15:15:07.294821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb880, cid 5, qid 0 00:22:05.526 [2024-12-09 15:15:07.294883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.526 [2024-12-09 15:15:07.294889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.526 [2024-12-09 15:15:07.294892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb880) on tqpair=0x1a59690 00:22:05.526 [2024-12-09 15:15:07.294909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.526 [2024-12-09 15:15:07.294913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a59690) 00:22:05.526 [2024-12-09 15:15:07.294919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.294925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.294929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a59690) 00:22:05.527 [2024-12-09 15:15:07.294934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.294940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.294943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a59690) 00:22:05.527 [2024-12-09 15:15:07.294949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.294955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.294959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a59690) 00:22:05.527 [2024-12-09 15:15:07.294964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.294974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb880, cid 5, qid 0 00:22:05.527 
[2024-12-09 15:15:07.294979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb700, cid 4, qid 0 00:22:05.527 [2024-12-09 15:15:07.294983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abba00, cid 6, qid 0 00:22:05.527 [2024-12-09 15:15:07.294988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbb80, cid 7, qid 0 00:22:05.527 [2024-12-09 15:15:07.295123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.527 [2024-12-09 15:15:07.295129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.527 [2024-12-09 15:15:07.295132] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295137] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=8192, cccid=5 00:22:05.527 [2024-12-09 15:15:07.295141] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb880) on tqpair(0x1a59690): expected_datao=0, payload_size=8192 00:22:05.527 [2024-12-09 15:15:07.295145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295159] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295163] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.527 [2024-12-09 15:15:07.295173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.527 [2024-12-09 15:15:07.295177] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295180] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=512, cccid=4 00:22:05.527 [2024-12-09 15:15:07.295183] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abb700) on tqpair(0x1a59690): expected_datao=0, payload_size=512 00:22:05.527 [2024-12-09 15:15:07.295187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295193] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295196] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.527 [2024-12-09 15:15:07.295206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.527 [2024-12-09 15:15:07.295209] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295212] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=512, cccid=6 00:22:05.527 [2024-12-09 15:15:07.295215] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abba00) on tqpair(0x1a59690): expected_datao=0, payload_size=512 00:22:05.527 [2024-12-09 15:15:07.295224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295229] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295233] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.527 [2024-12-09 15:15:07.295242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.527 [2024-12-09 15:15:07.295245] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295248] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a59690): datao=0, datal=4096, cccid=7 00:22:05.527 [2024-12-09 15:15:07.295252] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abbb80) on tqpair(0x1a59690): expected_datao=0, payload_size=4096 00:22:05.527 [2024-12-09 15:15:07.295256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295265] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb880) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb700) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abba00) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abbb80) on tqpair=0x1a59690 00:22:05.527 ===================================================== 00:22:05.527 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.527 ===================================================== 00:22:05.527 Controller Capabilities/Features 00:22:05.527 ================================ 00:22:05.527 Vendor ID: 8086 00:22:05.527 Subsystem Vendor ID: 8086 00:22:05.527 Serial Number: SPDK00000000000001 00:22:05.527 Model Number: SPDK bdev Controller 00:22:05.527 Firmware Version: 25.01 00:22:05.527 Recommended Arb Burst: 6 00:22:05.527 IEEE OUI Identifier: e4 d2 5c 00:22:05.527 Multi-path I/O 00:22:05.527 May have multiple subsystem ports: Yes 00:22:05.527 May have multiple controllers: Yes 00:22:05.527 Associated with SR-IOV VF: No 00:22:05.527 Max Data Transfer Size: 131072 00:22:05.527 Max Number of Namespaces: 32 00:22:05.527 Max Number of I/O Queues: 127 00:22:05.527 NVMe Specification Version (VS): 1.3 00:22:05.527 NVMe Specification Version (Identify): 1.3 
00:22:05.527 Maximum Queue Entries: 128 00:22:05.527 Contiguous Queues Required: Yes 00:22:05.527 Arbitration Mechanisms Supported 00:22:05.527 Weighted Round Robin: Not Supported 00:22:05.527 Vendor Specific: Not Supported 00:22:05.527 Reset Timeout: 15000 ms 00:22:05.527 Doorbell Stride: 4 bytes 00:22:05.527 NVM Subsystem Reset: Not Supported 00:22:05.527 Command Sets Supported 00:22:05.527 NVM Command Set: Supported 00:22:05.527 Boot Partition: Not Supported 00:22:05.527 Memory Page Size Minimum: 4096 bytes 00:22:05.527 Memory Page Size Maximum: 4096 bytes 00:22:05.527 Persistent Memory Region: Not Supported 00:22:05.527 Optional Asynchronous Events Supported 00:22:05.527 Namespace Attribute Notices: Supported 00:22:05.527 Firmware Activation Notices: Not Supported 00:22:05.527 ANA Change Notices: Not Supported 00:22:05.527 PLE Aggregate Log Change Notices: Not Supported 00:22:05.527 LBA Status Info Alert Notices: Not Supported 00:22:05.527 EGE Aggregate Log Change Notices: Not Supported 00:22:05.527 Normal NVM Subsystem Shutdown event: Not Supported 00:22:05.527 Zone Descriptor Change Notices: Not Supported 00:22:05.527 Discovery Log Change Notices: Not Supported 00:22:05.527 Controller Attributes 00:22:05.527 128-bit Host Identifier: Supported 00:22:05.527 Non-Operational Permissive Mode: Not Supported 00:22:05.527 NVM Sets: Not Supported 00:22:05.527 Read Recovery Levels: Not Supported 00:22:05.527 Endurance Groups: Not Supported 00:22:05.527 Predictable Latency Mode: Not Supported 00:22:05.527 Traffic Based Keep ALive: Not Supported 00:22:05.527 Namespace Granularity: Not Supported 00:22:05.527 SQ Associations: Not Supported 00:22:05.527 UUID List: Not Supported 00:22:05.527 Multi-Domain Subsystem: Not Supported 00:22:05.527 Fixed Capacity Management: Not Supported 00:22:05.527 Variable Capacity Management: Not Supported 00:22:05.527 Delete Endurance Group: Not Supported 00:22:05.527 Delete NVM Set: Not Supported 00:22:05.527 Extended LBA Formats Supported: Not Supported 00:22:05.527 Flexible Data Placement Supported: Not Supported 00:22:05.527 00:22:05.527 Controller Memory Buffer Support 00:22:05.527 ================================ 00:22:05.527 Supported: No 00:22:05.527 00:22:05.527 Persistent Memory Region Support 00:22:05.527 ================================ 00:22:05.527 Supported: No 00:22:05.527 00:22:05.527 Admin Command Set Attributes 00:22:05.527 ============================ 00:22:05.527 Security Send/Receive: Not Supported 00:22:05.527 Format NVM: Not Supported 00:22:05.527 Firmware Activate/Download: Not Supported 00:22:05.527 Namespace Management: Not Supported 00:22:05.527 Device Self-Test: Not Supported 00:22:05.527 Directives: Not Supported 00:22:05.527 NVMe-MI: Not Supported 00:22:05.527 Virtualization Management: Not Supported 00:22:05.527 Doorbell Buffer Config: Not Supported 00:22:05.527 Get LBA Status Capability: Not Supported 00:22:05.527 Command & Feature Lockdown Capability: Not Supported 00:22:05.527 Abort Command Limit: 4 00:22:05.527 Async Event Request Limit: 4 00:22:05.527 Number of Firmware Slots: N/A 00:22:05.527 Firmware Slot 1 Read-Only: N/A 00:22:05.527 Firmware Activation Without Reset: N/A 00:22:05.527 Multiple Update Detection Support: N/A 00:22:05.527 Firmware Update Granularity: No Information Provided 00:22:05.527 Per-Namespace SMART Log: No 00:22:05.527 Asymmetric Namespace Access Log Page: Not Supported 00:22:05.527 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:05.527 Command Effects Log Page: Supported 00:22:05.527 Get Log Page Extended 
Data: Supported 00:22:05.527 Telemetry Log Pages: Not Supported 00:22:05.527 Persistent Event Log Pages: Not Supported 00:22:05.527 Supported Log Pages Log Page: May Support 00:22:05.527 Commands Supported & Effects Log Page: Not Supported 00:22:05.527 Feature Identifiers & Effects Log Page:May Support 00:22:05.527 NVMe-MI Commands & Effects Log Page: May Support 00:22:05.527 Data Area 4 for Telemetry Log: Not Supported 00:22:05.527 Error Log Page Entries Supported: 128 00:22:05.527 Keep Alive: Supported 00:22:05.527 Keep Alive Granularity: 10000 ms 00:22:05.527 00:22:05.527 NVM Command Set Attributes 00:22:05.527 ========================== 00:22:05.527 Submission Queue Entry Size 00:22:05.527 Max: 64 00:22:05.527 Min: 64 00:22:05.527 Completion Queue Entry Size 00:22:05.527 Max: 16 00:22:05.527 Min: 16 00:22:05.527 Number of Namespaces: 32 00:22:05.527 Compare Command: Supported 00:22:05.527 Write Uncorrectable Command: Not Supported 00:22:05.527 Dataset Management Command: Supported 00:22:05.527 Write Zeroes Command: Supported 00:22:05.527 Set Features Save Field: Not Supported 00:22:05.527 Reservations: Supported 00:22:05.527 Timestamp: Not Supported 00:22:05.527 Copy: Supported 00:22:05.527 Volatile Write Cache: Present 00:22:05.527 Atomic Write Unit (Normal): 1 00:22:05.527 Atomic Write Unit (PFail): 1 00:22:05.527 Atomic Compare & Write Unit: 1 00:22:05.527 Fused Compare & Write: Supported 00:22:05.527 Scatter-Gather List 00:22:05.527 SGL Command Set: Supported 00:22:05.527 SGL Keyed: Supported 00:22:05.527 SGL Bit Bucket Descriptor: Not Supported 00:22:05.527 SGL Metadata Pointer: Not Supported 00:22:05.527 Oversized SGL: Not Supported 00:22:05.527 SGL Metadata Address: Not Supported 00:22:05.527 SGL Offset: Supported 00:22:05.527 Transport SGL Data Block: Not Supported 00:22:05.527 Replay Protected Memory Block: Not Supported 00:22:05.527 00:22:05.527 Firmware Slot Information 00:22:05.527 ========================= 00:22:05.527 Active slot: 1 00:22:05.527 Slot 1 Firmware Revision: 25.01 00:22:05.527 00:22:05.527 00:22:05.527 Commands Supported and Effects 00:22:05.527 ============================== 00:22:05.527 Admin Commands 00:22:05.527 -------------- 00:22:05.527 Get Log Page (02h): Supported 00:22:05.527 Identify (06h): Supported 00:22:05.527 Abort (08h): Supported 00:22:05.527 Set Features (09h): Supported 00:22:05.527 Get Features (0Ah): Supported 00:22:05.527 Asynchronous Event Request (0Ch): Supported 00:22:05.527 Keep Alive (18h): Supported 00:22:05.527 I/O Commands 00:22:05.527 ------------ 00:22:05.527 Flush (00h): Supported LBA-Change 00:22:05.527 Write (01h): Supported LBA-Change 00:22:05.527 Read (02h): Supported 00:22:05.527 Compare (05h): Supported 00:22:05.527 Write Zeroes (08h): Supported LBA-Change 00:22:05.527 Dataset Management (09h): Supported LBA-Change 00:22:05.527 Copy (19h): Supported LBA-Change 00:22:05.527 00:22:05.527 Error Log 00:22:05.527 ========= 00:22:05.527 00:22:05.527 Arbitration 00:22:05.527 =========== 00:22:05.527 Arbitration Burst: 1 00:22:05.527 00:22:05.527 Power Management 00:22:05.527 ================ 00:22:05.527 Number of Power States: 1 00:22:05.527 Current Power State: Power State #0 00:22:05.527 Power State #0: 00:22:05.527 Max Power: 0.00 W 00:22:05.527 Non-Operational State: Operational 00:22:05.527 Entry Latency: Not Reported 00:22:05.527 Exit Latency: Not Reported 00:22:05.527 Relative Read Throughput: 0 00:22:05.527 Relative Read Latency: 0 00:22:05.527 Relative Write Throughput: 0 00:22:05.527 Relative Write Latency: 0 
00:22:05.527 Idle Power: Not Reported 00:22:05.527 Active Power: Not Reported 00:22:05.527 Non-Operational Permissive Mode: Not Supported 00:22:05.527 00:22:05.527 Health Information 00:22:05.527 ================== 00:22:05.527 Critical Warnings: 00:22:05.527 Available Spare Space: OK 00:22:05.527 Temperature: OK 00:22:05.527 Device Reliability: OK 00:22:05.527 Read Only: No 00:22:05.527 Volatile Memory Backup: OK 00:22:05.527 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:05.527 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:05.527 Available Spare: 0% 00:22:05.527 Available Spare Threshold: 0% 00:22:05.527 Life Percentage Used:[2024-12-09 15:15:07.295426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a59690) 00:22:05.527 [2024-12-09 15:15:07.295436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.295448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abbb80, cid 7, qid 0 00:22:05.527 [2024-12-09 15:15:07.295525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abbb80) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295568] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:05.527 [2024-12-09 15:15:07.295577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb100) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.527 [2024-12-09 15:15:07.295588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb280) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.527 [2024-12-09 15:15:07.295596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb400) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.527 [2024-12-09 15:15:07.295605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.527 [2024-12-09 15:15:07.295615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.527 [2024-12-09 15:15:07.295628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.295640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.527 [2024-12-09 15:15:07.295702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.527 [2024-12-09 15:15:07.295734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.527 [2024-12-09 15:15:07.295747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.527 [2024-12-09 15:15:07.295822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.527 [2024-12-09 15:15:07.295828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.527 [2024-12-09 15:15:07.295831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.527 [2024-12-09 15:15:07.295834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.527 [2024-12-09 15:15:07.295838] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:05.528 [2024-12-09 15:15:07.295842] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:05.528 [2024-12-09 15:15:07.295850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.295854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.295857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.295863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.295872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.295931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.295937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.295940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.295944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.295952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.295956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.295959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.295964] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.295975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296291] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 
15:15:07.296677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.296905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.296913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.296918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.296928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.296990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.296995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.296998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 
[2024-12-09 15:15:07.297002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.297010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.297014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.297017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.297025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.297035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.297097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.297103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.297106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.297109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.297117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.297121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.297124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.297129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.297138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.297205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.297211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.297214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.301223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.301234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.301238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.301241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a59690) 00:22:05.528 [2024-12-09 15:15:07.301246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.528 [2024-12-09 15:15:07.301257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abb580, cid 3, qid 0 00:22:05.528 [2024-12-09 15:15:07.301414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.528 [2024-12-09 15:15:07.301419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.528 [2024-12-09 15:15:07.301422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.528 [2024-12-09 15:15:07.301425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abb580) on tqpair=0x1a59690 00:22:05.528 [2024-12-09 15:15:07.301432] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:22:05.787 0% 00:22:05.787 Data Units Read: 0 00:22:05.787 Data Units Written: 0 00:22:05.787 Host Read Commands: 0 00:22:05.787 Host Write Commands: 0 00:22:05.787 Controller Busy Time: 0 minutes 00:22:05.787 Power Cycles: 0 00:22:05.787 Power On Hours: 0 hours 00:22:05.787 Unsafe Shutdowns: 0 00:22:05.787 Unrecoverable Media Errors: 0 00:22:05.787 Lifetime Error Log Entries: 0 00:22:05.787 Warning Temperature Time: 0 minutes 00:22:05.787 Critical Temperature Time: 0 minutes 00:22:05.787 00:22:05.787 Number of Queues 00:22:05.787 ================ 00:22:05.787 Number of I/O Submission Queues: 127 00:22:05.787 Number of I/O Completion Queues: 127 00:22:05.787 00:22:05.787 Active Namespaces 00:22:05.787 ================= 00:22:05.788 Namespace ID:1 00:22:05.788 Error Recovery Timeout: Unlimited 00:22:05.788 Command Set Identifier: NVM (00h) 00:22:05.788 Deallocate: Supported 00:22:05.788 Deallocated/Unwritten Error: Not Supported 00:22:05.788 Deallocated Read Value: Unknown 00:22:05.788 Deallocate in Write Zeroes: Not Supported 00:22:05.788 Deallocated Guard Field: 0xFFFF 00:22:05.788 Flush: Supported 00:22:05.788 Reservation: Supported 00:22:05.788 Namespace Sharing Capabilities: Multiple Controllers 00:22:05.788 Size (in LBAs): 131072 (0GiB) 00:22:05.788 Capacity (in LBAs): 131072 (0GiB) 00:22:05.788 Utilization (in LBAs): 131072 (0GiB) 00:22:05.788 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:05.788 EUI64: ABCDEF0123456789 00:22:05.788 UUID: d5e8a9ac-5851-4a40-b592-bb77be5699b9 00:22:05.788 Thin Provisioning: Not Supported 00:22:05.788 Per-NS Atomic Units: Yes 00:22:05.788 Atomic Boundary Size (Normal): 0 00:22:05.788 Atomic Boundary Size (PFail): 0 00:22:05.788 Atomic Boundary Offset: 0 00:22:05.788 Maximum Single Source Range Length: 65535 00:22:05.788 Maximum Copy Length: 65535 00:22:05.788 Maximum Source Range Count: 1 00:22:05.788 NGUID/EUI64 Never Reused: No 00:22:05.788 Namespace Write Protected: No 00:22:05.788 Number of LBA Formats: 1 00:22:05.788 Current LBA Format: LBA Format #00 00:22:05.788 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:05.788 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:22:05.788 rmmod nvme_tcp 00:22:05.788 rmmod nvme_fabrics 00:22:05.788 rmmod nvme_keyring 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1509511 ']' 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1509511 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1509511 ']' 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1509511 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1509511 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1509511' 00:22:05.788 killing process with pid 1509511 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1509511 00:22:05.788 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1509511 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.047 15:15:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.952 00:22:07.952 real 0m9.367s 00:22:07.952 user 0m5.535s 00:22:07.952 sys 0m4.890s 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:07.952 ************************************ 00:22:07.952 END TEST nvmf_identify 00:22:07.952 
************************************ 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.952 15:15:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.211 ************************************ 00:22:08.211 START TEST nvmf_perf 00:22:08.211 ************************************ 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:08.211 * Looking for test storage... 00:22:08.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.211 --rc genhtml_branch_coverage=1 00:22:08.211 --rc genhtml_function_coverage=1 00:22:08.211 --rc genhtml_legend=1 00:22:08.211 --rc geninfo_all_blocks=1 00:22:08.211 --rc geninfo_unexecuted_blocks=1 00:22:08.211 00:22:08.211 ' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.211 --rc genhtml_branch_coverage=1 00:22:08.211 --rc genhtml_function_coverage=1 00:22:08.211 --rc genhtml_legend=1 00:22:08.211 --rc geninfo_all_blocks=1 00:22:08.211 --rc geninfo_unexecuted_blocks=1 00:22:08.211 00:22:08.211 ' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.211 --rc genhtml_branch_coverage=1 00:22:08.211 --rc genhtml_function_coverage=1 00:22:08.211 --rc genhtml_legend=1 00:22:08.211 --rc geninfo_all_blocks=1 00:22:08.211 --rc geninfo_unexecuted_blocks=1 00:22:08.211 00:22:08.211 ' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.211 --rc genhtml_branch_coverage=1 00:22:08.211 --rc genhtml_function_coverage=1 00:22:08.211 --rc genhtml_legend=1 00:22:08.211 --rc geninfo_all_blocks=1 00:22:08.211 --rc geninfo_unexecuted_blocks=1 00:22:08.211 00:22:08.211 ' 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.211 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.212 15:15:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.212 15:15:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:14.783 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:14.783 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:14.783 Found net devices under 0000:af:00.0: cvl_0_0 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.783 15:15:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:14.783 Found net devices under 0000:af:00.1: cvl_0_1 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.783 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.784 15:15:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:22:14.784 00:22:14.784 --- 10.0.0.2 ping statistics --- 00:22:14.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.784 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:22:14.784 00:22:14.784 --- 10.0.0.1 ping statistics --- 00:22:14.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.784 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1513463 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1513463 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1513463 ']' 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:14.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.784 15:15:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.784 [2024-12-09 15:15:15.878775] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:22:14.784 [2024-12-09 15:15:15.878823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.784 [2024-12-09 15:15:15.957932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.784 [2024-12-09 15:15:15.998882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.784 [2024-12-09 15:15:15.998916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.784 [2024-12-09 15:15:15.998922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.784 [2024-12-09 15:15:15.998928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.784 [2024-12-09 15:15:15.998933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.784 [2024-12-09 15:15:16.000315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.784 [2024-12-09 15:15:16.000426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.784 [2024-12-09 15:15:16.000531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.784 [2024-12-09 15:15:16.000533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:14.784 15:15:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:18.072 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:18.072 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:18.072 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
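For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh @250-@291) builds a two-namespace loopback topology on the pair of E810 ports it discovered: the target-side port cvl_0_0 is moved into a fresh namespace cvl_0_0_ns_spdk as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-checked. A minimal stand-alone sketch of that bring-up, reusing the interface names and addresses from this trace (they would differ on other hardware), looks like this:

#!/usr/bin/env bash
# Sketch of the topology nvmf_tcp_init builds in this run (names/IPs taken from the trace above).
set -e
TARGET_NS=cvl_0_0_ns_spdk     # namespace that will host the NVMe-oF target
TARGET_IF=cvl_0_0             # port handed to the target namespace (10.0.0.2)
INITIATOR_IF=cvl_0_1          # port left in the root namespace for the initiator (10.0.0.1)

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side, tagged so cleanup can strip the rule later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify both directions before the target is started.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1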
00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.073 [2024-12-09 15:15:19.766527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.073 15:15:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.331 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:18.331 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.590 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:18.590 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:18.848 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.848 [2024-12-09 15:15:20.578756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.848 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:19.107 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:19.107 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:19.107 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:19.107 15:15:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:20.485 Initializing NVMe Controllers 00:22:20.485 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:20.485 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:20.485 Initialization complete. Launching workers. 
00:22:20.485 ======================================================== 00:22:20.485 Latency(us) 00:22:20.485 Device Information : IOPS MiB/s Average min max 00:22:20.485 PCIE (0000:5e:00.0) NSID 1 from core 0: 98592.75 385.13 323.95 10.30 4582.88 00:22:20.485 ======================================================== 00:22:20.485 Total : 98592.75 385.13 323.95 10.30 4582.88 00:22:20.485 00:22:20.485 15:15:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:21.861 Initializing NVMe Controllers 00:22:21.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:21.861 Initialization complete. Launching workers. 00:22:21.861 ======================================================== 00:22:21.861 Latency(us) 00:22:21.861 Device Information : IOPS MiB/s Average min max 00:22:21.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.00 0.30 13114.40 106.84 45797.40 00:22:21.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15208.33 4101.39 47885.67 00:22:21.861 ======================================================== 00:22:21.861 Total : 144.00 0.56 14074.12 106.84 47885.67 00:22:21.861 00:22:21.861 15:15:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:23.238 Initializing NVMe Controllers 00:22:23.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:23.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:23.238 Initialization complete. Launching workers. 00:22:23.238 ======================================================== 00:22:23.238 Latency(us) 00:22:23.238 Device Information : IOPS MiB/s Average min max 00:22:23.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11125.11 43.46 2876.99 455.65 8703.61 00:22:23.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3809.67 14.88 8455.63 4627.34 16822.48 00:22:23.238 ======================================================== 00:22:23.238 Total : 14934.77 58.34 4300.03 455.65 16822.48 00:22:23.238 00:22:23.238 15:15:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:23.238 15:15:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:23.238 15:15:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:25.783 Initializing NVMe Controllers 00:22:25.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.783 Controller IO queue size 128, less than required. 00:22:25.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:25.783 Controller IO queue size 128, less than required. 00:22:25.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:25.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:25.783 Initialization complete. Launching workers. 00:22:25.783 ======================================================== 00:22:25.783 Latency(us) 00:22:25.783 Device Information : IOPS MiB/s Average min max 00:22:25.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1795.59 448.90 72721.86 49726.18 128597.95 00:22:25.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 562.87 140.72 230706.83 94820.35 329322.25 00:22:25.784 ======================================================== 00:22:25.784 Total : 2358.47 589.62 110426.62 49726.18 329322.25 00:22:25.784 00:22:25.784 15:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:25.784 No valid NVMe controllers or AIO or URING devices found 00:22:25.784 Initializing NVMe Controllers 00:22:25.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.784 Controller IO queue size 128, less than required. 00:22:25.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:25.784 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:25.784 Controller IO queue size 128, less than required. 00:22:25.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:25.784 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:25.784 WARNING: Some requested NVMe devices were skipped 00:22:25.784 15:15:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:28.319 Initializing NVMe Controllers 00:22:28.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.319 Controller IO queue size 128, less than required. 00:22:28.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:28.319 Controller IO queue size 128, less than required. 00:22:28.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:28.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:28.319 Initialization complete. Launching workers. 
00:22:28.320 00:22:28.320 ==================== 00:22:28.320 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:28.320 TCP transport: 00:22:28.320 polls: 16543 00:22:28.320 idle_polls: 12660 00:22:28.320 sock_completions: 3883 00:22:28.320 nvme_completions: 6271 00:22:28.320 submitted_requests: 9362 00:22:28.320 queued_requests: 1 00:22:28.320 00:22:28.320 ==================== 00:22:28.320 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:28.320 TCP transport: 00:22:28.320 polls: 15519 00:22:28.320 idle_polls: 12006 00:22:28.320 sock_completions: 3513 00:22:28.320 nvme_completions: 6709 00:22:28.320 submitted_requests: 10192 00:22:28.320 queued_requests: 1 00:22:28.320 ======================================================== 00:22:28.320 Latency(us) 00:22:28.320 Device Information : IOPS MiB/s Average min max 00:22:28.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1567.40 391.85 84430.58 52159.15 145929.32 00:22:28.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1676.89 419.22 76592.01 42037.63 122387.63 00:22:28.320 ======================================================== 00:22:28.320 Total : 3244.30 811.07 80379.02 42037.63 145929.32 00:22:28.320 00:22:28.320 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:28.320 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.579 rmmod nvme_tcp 00:22:28.579 rmmod nvme_fabrics 00:22:28.579 rmmod nvme_keyring 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1513463 ']' 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1513463 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1513463 ']' 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1513463 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1513463 00:22:28.579 15:15:30 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1513463' 00:22:28.579 killing process with pid 1513463 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1513463 00:22:28.579 15:15:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1513463 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.483 15:15:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.532 00:22:32.532 real 0m24.157s 00:22:32.532 user 1m2.849s 00:22:32.532 sys 0m8.296s 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:32.532 ************************************ 00:22:32.532 END TEST nvmf_perf 00:22:32.532 ************************************ 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.532 ************************************ 00:22:32.532 START TEST nvmf_fio_host 00:22:32.532 ************************************ 00:22:32.532 15:15:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:32.532 * Looking for test storage... 
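With the perf test finished above, the passes it ran are worth spelling out: each one drives the same spdk_nvme_perf binary against the listener created at 10.0.0.2:4420, varying only queue depth, I/O size and run time. Restated as a sketch, the host/perf.sh@57 invocation reads as follows (the -H/-I digest interpretation is an assumption, not something this trace confirms):

# NVMe/TCP perf pass from the trace above, with the knobs called out:
#   -q 32      queue depth per namespace
#   -o 4096    I/O size in bytes
#   -w randrw  random mixed workload, -M 50 = 50% reads
#   -t 1       run time in seconds
#   -HI        assumed: enable NVMe/TCP header and data digests
#   -r ...     transport ID of the subsystem listener added earlier
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The same listener could also be reached with the kernel initiator using the NVME_CONNECT/NVME_HOST values exported at the top of the trace (nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 plus the --hostnqn/--hostid pair), but the perf tool itself uses SPDK's userspace initiator rather than the kernel nvme-tcp driver.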
00:22:32.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:32.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.532 --rc genhtml_branch_coverage=1 00:22:32.532 --rc genhtml_function_coverage=1 00:22:32.532 --rc genhtml_legend=1 00:22:32.532 --rc geninfo_all_blocks=1 00:22:32.532 --rc geninfo_unexecuted_blocks=1 00:22:32.532 00:22:32.532 ' 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:32.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.532 --rc genhtml_branch_coverage=1 00:22:32.532 --rc genhtml_function_coverage=1 00:22:32.532 --rc genhtml_legend=1 00:22:32.532 --rc geninfo_all_blocks=1 00:22:32.532 --rc geninfo_unexecuted_blocks=1 00:22:32.532 00:22:32.532 ' 00:22:32.532 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:32.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.533 --rc genhtml_branch_coverage=1 00:22:32.533 --rc genhtml_function_coverage=1 00:22:32.533 --rc genhtml_legend=1 00:22:32.533 --rc geninfo_all_blocks=1 00:22:32.533 --rc geninfo_unexecuted_blocks=1 00:22:32.533 00:22:32.533 ' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:32.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.533 --rc genhtml_branch_coverage=1 00:22:32.533 --rc genhtml_function_coverage=1 00:22:32.533 --rc genhtml_legend=1 00:22:32.533 --rc geninfo_all_blocks=1 00:22:32.533 --rc geninfo_unexecuted_blocks=1 00:22:32.533 00:22:32.533 ' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.533 15:15:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:32.533 
15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.533 15:15:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:39.103 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:39.103 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:39.103 Found net devices under 0000:af:00.0: cvl_0_0 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.103 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:39.103 Found net devices under 0000:af:00.1: cvl_0_1 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.104 15:15:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:22:39.104 00:22:39.104 --- 10.0.0.2 ping statistics --- 00:22:39.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.104 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:39.104 00:22:39.104 --- 10.0.0.1 ping statistics --- 00:22:39.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.104 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1519526 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1519526 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1519526 ']' 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.104 [2024-12-09 15:15:40.274618] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
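The setup traced above boils down to a small amount of plumbing: one E810 port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, connectivity is checked with ping in both directions, and nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, using only the names, addresses and flags printed in the trace (the SPDK binary path is shortened and the iptables comment tag is omitted for readability):

ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
modprobe nvme-tcp                                 # kernel NVMe/TCP initiator driver
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target app on 4 cores

Running the target inside the namespace while its RPC socket lives on the shared filesystem is what lets the rest of the script configure it with plain rpc.py calls from the root namespace.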
00:22:39.104 [2024-12-09 15:15:40.274662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.104 [2024-12-09 15:15:40.351655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.104 [2024-12-09 15:15:40.392270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.104 [2024-12-09 15:15:40.392306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.104 [2024-12-09 15:15:40.392313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.104 [2024-12-09 15:15:40.392319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.104 [2024-12-09 15:15:40.392324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.104 [2024-12-09 15:15:40.393767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.104 [2024-12-09 15:15:40.393877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.104 [2024-12-09 15:15:40.393984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.104 [2024-12-09 15:15:40.393985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:39.104 [2024-12-09 15:15:40.663013] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.104 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:39.362 Malloc1 00:22:39.363 15:15:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.621 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.621 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.880 [2024-12-09 15:15:41.552780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.880 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:40.139 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:40.140 15:15:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:40.398 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:40.399 fio-3.35 00:22:40.399 Starting 1 thread 00:22:42.938 00:22:42.938 test: (groupid=0, jobs=1): 
err= 0: pid=1519907: Mon Dec 9 15:15:44 2024 00:22:42.938 read: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(93.6MiB/2005msec) 00:22:42.938 slat (nsec): min=1524, max=247843, avg=1718.74, stdev=2226.74 00:22:42.938 clat (usec): min=3184, max=10319, avg=5932.20, stdev=466.32 00:22:42.938 lat (usec): min=3217, max=10321, avg=5933.92, stdev=466.29 00:22:42.938 clat percentiles (usec): 00:22:42.938 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:22:42.938 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:22:42.938 | 70.00th=[ 6194], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:22:42.938 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 8979], 00:22:42.938 | 99.99th=[ 9765] 00:22:42.938 bw ( KiB/s): min=47104, max=48280, per=99.93%, avg=47788.00, stdev=541.82, samples=4 00:22:42.938 iops : min=11776, max=12070, avg=11947.00, stdev=135.45, samples=4 00:22:42.938 write: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.2MiB/2005msec); 0 zone resets 00:22:42.938 slat (nsec): min=1568, max=239508, avg=1800.85, stdev=1720.93 00:22:42.938 clat (usec): min=2438, max=8966, avg=4775.75, stdev=385.40 00:22:42.938 lat (usec): min=2454, max=8968, avg=4777.55, stdev=385.48 00:22:42.938 clat percentiles (usec): 00:22:42.938 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:22:42.938 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 00:22:42.938 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:22:42.938 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 7308], 99.95th=[ 8455], 00:22:42.938 | 99.99th=[ 8848] 00:22:42.938 bw ( KiB/s): min=47216, max=47968, per=100.00%, avg=47612.00, stdev=321.16, samples=4 00:22:42.938 iops : min=11804, max=11992, avg=11903.00, stdev=80.29, samples=4 00:22:42.938 lat (msec) : 4=0.77%, 10=99.23%, 20=0.01% 00:22:42.938 cpu : usr=74.05%, sys=25.00%, ctx=91, majf=0, minf=2 00:22:42.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:42.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:42.938 issued rwts: total=23971,23856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:42.938 00:22:42.938 Run status group 0 (all jobs): 00:22:42.938 READ: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=93.6MiB (98.2MB), run=2005-2005msec 00:22:42.938 WRITE: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.2MiB (97.7MB), run=2005-2005msec 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:42.938 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:42.939 15:15:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:43.201 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:43.201 fio-3.35 00:22:43.201 Starting 1 thread 00:22:45.737 00:22:45.737 test: (groupid=0, jobs=1): err= 0: pid=1520468: Mon Dec 9 15:15:47 2024 00:22:45.737 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2006msec) 00:22:45.737 slat (nsec): min=2496, max=86986, avg=2837.49, stdev=1223.92 00:22:45.737 clat (usec): min=1781, max=12548, avg=6662.01, stdev=1578.80 00:22:45.737 lat (usec): min=1784, max=12551, avg=6664.85, stdev=1578.87 00:22:45.737 clat percentiles (usec): 00:22:45.737 | 1.00th=[ 3556], 5.00th=[ 4146], 10.00th=[ 4555], 20.00th=[ 5211], 00:22:45.737 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:22:45.737 | 70.00th=[ 7504], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 9241], 00:22:45.737 | 99.00th=[10552], 99.50th=[11207], 99.90th=[12125], 99.95th=[12125], 00:22:45.737 | 99.99th=[12387] 00:22:45.737 bw ( KiB/s): min=87072, max=93184, per=51.02%, avg=90089.50, stdev=3466.69, samples=4 00:22:45.737 iops : min= 5442, max= 5824, avg=5630.50, stdev=216.56, samples=4 00:22:45.737 write: IOPS=6455, BW=101MiB/s (106MB/s)(184MiB/1826msec); 0 zone resets 00:22:45.737 
slat (usec): min=29, max=241, avg=31.59, stdev= 5.85 00:22:45.737 clat (usec): min=3462, max=13407, avg=8470.67, stdev=1483.60 00:22:45.737 lat (usec): min=3491, max=13437, avg=8502.25, stdev=1484.40 00:22:45.737 clat percentiles (usec): 00:22:45.737 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7177], 00:22:45.737 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8717], 00:22:45.737 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:22:45.737 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13042], 99.95th=[13173], 00:22:45.737 | 99.99th=[13435] 00:22:45.737 bw ( KiB/s): min=90368, max=97085, per=90.87%, avg=93847.25, stdev=3208.08, samples=4 00:22:45.737 iops : min= 5648, max= 6067, avg=5865.25, stdev=200.23, samples=4 00:22:45.737 lat (msec) : 2=0.01%, 4=2.31%, 10=90.67%, 20=7.02% 00:22:45.737 cpu : usr=87.08%, sys=12.22%, ctx=38, majf=0, minf=2 00:22:45.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:45.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:45.737 issued rwts: total=22139,11787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:45.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:45.737 00:22:45.737 Run status group 0 (all jobs): 00:22:45.737 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2006-2006msec 00:22:45.737 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=184MiB (193MB), run=1826-1826msec 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.737 rmmod nvme_tcp 00:22:45.737 rmmod nvme_fabrics 00:22:45.737 rmmod nvme_keyring 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1519526 ']' 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1519526 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1519526 ']' 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 1519526 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.737 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1519526 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1519526' 00:22:45.997 killing process with pid 1519526 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1519526 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1519526 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.997 15:15:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.533 00:22:48.533 real 0m15.829s 00:22:48.533 user 0m46.563s 00:22:48.533 sys 0m6.422s 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.533 ************************************ 00:22:48.533 END TEST nvmf_fio_host 00:22:48.533 ************************************ 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.533 ************************************ 00:22:48.533 START TEST nvmf_failover 00:22:48.533 ************************************ 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:48.533 * Looking for test storage... 00:22:48.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:48.533 15:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.533 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:48.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.534 --rc genhtml_branch_coverage=1 00:22:48.534 --rc genhtml_function_coverage=1 00:22:48.534 --rc genhtml_legend=1 00:22:48.534 --rc geninfo_all_blocks=1 00:22:48.534 --rc geninfo_unexecuted_blocks=1 00:22:48.534 00:22:48.534 ' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:48.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.534 --rc genhtml_branch_coverage=1 00:22:48.534 --rc genhtml_function_coverage=1 00:22:48.534 --rc genhtml_legend=1 00:22:48.534 --rc geninfo_all_blocks=1 00:22:48.534 --rc geninfo_unexecuted_blocks=1 00:22:48.534 00:22:48.534 ' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:48.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.534 --rc genhtml_branch_coverage=1 00:22:48.534 --rc genhtml_function_coverage=1 00:22:48.534 --rc genhtml_legend=1 00:22:48.534 --rc geninfo_all_blocks=1 00:22:48.534 --rc geninfo_unexecuted_blocks=1 00:22:48.534 00:22:48.534 ' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:48.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.534 --rc genhtml_branch_coverage=1 00:22:48.534 --rc genhtml_function_coverage=1 00:22:48.534 --rc genhtml_legend=1 00:22:48.534 --rc geninfo_all_blocks=1 00:22:48.534 --rc geninfo_unexecuted_blocks=1 00:22:48.534 00:22:48.534 ' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
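failover.sh has just set up the same rpc_py helper plus a dedicated bdevperf RPC socket, and it drives the target the same way the fio_host test above did. For reference, that provisioning pattern from the fio run, condensed (rpc.py stands for scripts/rpc.py in the SPDK workspace; names, ports and sizes are the ones shown in the trace):

rpc.py nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512 -b Malloc1                           # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # expose the bdev as namespace 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# fio then connects through the preloaded SPDK NVMe ioengine; the "filename"
# encodes the transport address instead of naming a block device.
LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096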
00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.534 15:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:55.104 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:55.104 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:55.104 Found net devices under 0000:af:00.0: cvl_0_0 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.104 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:55.105 Found net devices under 0000:af:00.1: cvl_0_1 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:22:55.105 00:22:55.105 --- 10.0.0.2 ping statistics --- 00:22:55.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.105 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:55.105 00:22:55.105 --- 10.0.0.1 ping statistics --- 00:22:55.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.105 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.105 15:15:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1524409 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1524409 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1524409 ']' 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 [2024-12-09 15:15:56.065908] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:22:55.105 [2024-12-09 15:15:56.065951] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.105 [2024-12-09 15:15:56.143766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.105 [2024-12-09 15:15:56.183255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
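With the target app up on three cores (-m 0xE), the remainder of the failover setup, gathered from the commands that follow in this log, looks like this in condensed form: one subsystem backed by Malloc0, listeners on all three ports defined by common.sh, bdevperf attached over its own RPC socket with -x failover, and a listener removal to force the path switch (paths shortened; everything else as printed below):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                             # NVMF_PORT / SECOND_PORT / THIRD_PORT
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # primary path
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover   # alternate path
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &  # start I/O
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop the primary to trigger failover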
00:22:55.105 [2024-12-09 15:15:56.183290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.105 [2024-12-09 15:15:56.183297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.105 [2024-12-09 15:15:56.183303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.105 [2024-12-09 15:15:56.183308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.105 [2024-12-09 15:15:56.184675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.105 [2024-12-09 15:15:56.184783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.105 [2024-12-09 15:15:56.184784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:55.105 [2024-12-09 15:15:56.480621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:55.105 Malloc0 00:22:55.105 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.363 15:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.363 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.620 [2024-12-09 15:15:57.321401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.620 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:55.877 [2024-12-09 15:15:57.517931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:55.877 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:56.134 [2024-12-09 15:15:57.706533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1524664 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1524664 /var/tmp/bdevperf.sock 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1524664 ']' 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.134 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.391 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.391 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:56.391 15:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:56.647 NVMe0n1 00:22:56.903 15:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:57.159 00:22:57.159 15:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1524887 00:22:57.159 15:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.159 15:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:58.090 15:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.346 [2024-12-09 15:16:00.004032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 
15:16:00.004096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.346 [2024-12-09 15:16:00.004157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to 
be set 00:22:58.347 [2024-12-09 15:16:00.004248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 [2024-12-09 15:16:00.004325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b7760 is same with the state(6) to be set 00:22:58.347 15:16:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:01.617 15:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:01.873 00:23:01.873 15:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.129 15:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:05.408 15:16:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.408 [2024-12-09 15:16:06.882736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.408 15:16:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:06.339 15:16:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:06.596 15:16:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1524887 00:23:13.151 { 00:23:13.151 "results": [ 00:23:13.151 { 00:23:13.151 "job": "NVMe0n1", 
00:23:13.151 "core_mask": "0x1", 00:23:13.151 "workload": "verify", 00:23:13.151 "status": "finished", 00:23:13.151 "verify_range": { 00:23:13.151 "start": 0, 00:23:13.151 "length": 16384 00:23:13.151 }, 00:23:13.151 "queue_depth": 128, 00:23:13.151 "io_size": 4096, 00:23:13.151 "runtime": 15.011623, 00:23:13.151 "iops": 11266.336757857562, 00:23:13.151 "mibps": 44.0091279603811, 00:23:13.151 "io_failed": 12509, 00:23:13.151 "io_timeout": 0, 00:23:13.151 "avg_latency_us": 10557.285347700188, 00:23:13.151 "min_latency_us": 427.1542857142857, 00:23:13.151 "max_latency_us": 21346.01142857143 00:23:13.151 } 00:23:13.151 ], 00:23:13.151 "core_count": 1 00:23:13.151 } 00:23:13.151 15:16:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1524664 00:23:13.151 15:16:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1524664 ']' 00:23:13.151 15:16:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1524664 00:23:13.151 15:16:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:13.151 15:16:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.151 15:16:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1524664 00:23:13.151 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.151 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.151 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1524664' 00:23:13.151 killing process with pid 1524664 00:23:13.151 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1524664 00:23:13.151 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1524664 00:23:13.151 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:13.151 [2024-12-09 15:15:57.765722] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:23:13.151 [2024-12-09 15:15:57.765772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524664 ] 00:23:13.151 [2024-12-09 15:15:57.841452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.151 [2024-12-09 15:15:57.881449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.151 Running I/O for 15 seconds... 
00:23:13.151 11195.00 IOPS, 43.73 MiB/s [2024-12-09T14:16:14.946Z] [2024-12-09 15:16:00.005664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.151 [2024-12-09 15:16:00.005826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.151 [2024-12-09 15:16:00.005840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:13.151 [2024-12-09 15:16:00.005848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.005983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.005989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 
15:16:00.005997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-12-09 15:16:00.006295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.152 [2024-12-09 15:16:00.006450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98968 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.152 [2024-12-09 15:16:00.006456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 
[2024-12-09 15:16:00.006598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.006988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.006995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.007002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.007009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.007017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.007023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.007031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.007038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:13.153 [2024-12-09 15:16:00.007045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.153 [2024-12-09 15:16:00.007052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 
15:16:00.007199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.154 [2024-12-09 15:16:00.007255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99416 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99424 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99432 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99440 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99448 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99456 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99464 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99472 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99480 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 
[2024-12-09 15:16:00.007542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99488 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99520 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:23:13.154 [2024-12-09 15:16:00.007678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.154 [2024-12-09 15:16:00.007684] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.154 [2024-12-09 15:16:00.007690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.154 [2024-12-09 15:16:00.007695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.007701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.007708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.155 [2024-12-09 15:16:00.007713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.155 [2024-12-09 15:16:00.007718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.007724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.007731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.155 [2024-12-09 15:16:00.018582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.155 [2024-12-09 15:16:00.018597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98840 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.018607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.155 [2024-12-09 15:16:00.018624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.155 [2024-12-09 15:16:00.018631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.018642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.155 [2024-12-09 15:16:00.018660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.155 [2024-12-09 15:16:00.018668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98856 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.018677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.155 [2024-12-09 15:16:00.018696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.155 [2024-12-09 15:16:00.018704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.018712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:23:13.155 [2024-12-09 15:16:00.018728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.155 [2024-12-09 15:16:00.018735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98872 len:8 PRP1 0x0 PRP2 0x0 00:23:13.155 [2024-12-09 15:16:00.018744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018792] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:13.155 [2024-12-09 15:16:00.018820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.155 [2024-12-09 15:16:00.018831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.155 [2024-12-09 15:16:00.018851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.155 [2024-12-09 15:16:00.018871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.155 [2024-12-09 15:16:00.018889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:00.018898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:13.155 [2024-12-09 15:16:00.018930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3c8d0 (9): Bad file descriptor 00:23:13.155 [2024-12-09 15:16:00.022698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:13.155 [2024-12-09 15:16:00.177734] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
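The sequence above — queued I/O completed with ABORTED - SQ DELETION, bdev_nvme_failover_trid switching the path from 10.0.0.2:4420 to 10.0.0.2:4421, and the subsequent controller reset completing — is the host-side behaviour expected when the active listener is torn down while an alternate path is registered for the same subsystem. A minimal sketch of such a setup, assuming the subsystem NQN, address, and ports seen in this log and SPDK's standard scripts/rpc.py helpers (the bdevperf RPC socket path, bdev names, and malloc sizing below are illustrative, not taken from this run):

# Target: one malloc namespace behind three TCP listeners on the same subsystem
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Host (bdevperf): attach the primary path, then register the alternates as failover paths
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# Removing the active listener forces a failover like the one logged above
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420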
00:23:13.155 10301.50 IOPS, 40.24 MiB/s [2024-12-09T14:16:14.950Z] 10692.67 IOPS, 41.77 MiB/s [2024-12-09T14:16:14.950Z] 10917.25 IOPS, 42.65 MiB/s [2024-12-09T14:16:14.950Z] [2024-12-09 15:16:03.662602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.155 [2024-12-09 15:16:03.662780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-12-09 15:16:03.662902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-12-09 15:16:03.662910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.662917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.662926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.662935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.662944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.662951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.662960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.662968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.662976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.662984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.662993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.662999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 
15:16:03.663294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.156 [2024-12-09 15:16:03.663420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-12-09 15:16:03.663546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.156 [2024-12-09 15:16:03.663554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663599] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.663991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.663999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 
15:16:03.664068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.157 [2024-12-09 15:16:03.664166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.157 [2024-12-09 15:16:03.664173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:03.664517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.158 [2024-12-09 15:16:03.664622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6b110 is same with the state(6) to be set 00:23:13.158 [2024-12-09 15:16:03.664642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.158 [2024-12-09 15:16:03.664648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.158 [2024-12-09 15:16:03.664655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83008 len:8 PRP1 0x0 PRP2 0x0 00:23:13.158 [2024-12-09 15:16:03.664664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664706] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:13.158 [2024-12-09 15:16:03.664728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.158 [2024-12-09 15:16:03.664735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:13.158 [2024-12-09 15:16:03.664750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.158 [2024-12-09 15:16:03.664767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.158 [2024-12-09 15:16:03.664787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:03.664796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:13.158 [2024-12-09 15:16:03.667622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:13.158 [2024-12-09 15:16:03.667653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3c8d0 (9): Bad file descriptor 00:23:13.158 [2024-12-09 15:16:03.697309] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:13.158 10928.60 IOPS, 42.69 MiB/s [2024-12-09T14:16:14.953Z] 11016.67 IOPS, 43.03 MiB/s [2024-12-09T14:16:14.953Z] 11101.71 IOPS, 43.37 MiB/s [2024-12-09T14:16:14.953Z] 11151.88 IOPS, 43.56 MiB/s [2024-12-09T14:16:14.953Z] 11205.33 IOPS, 43.77 MiB/s [2024-12-09T14:16:14.953Z] [2024-12-09 15:16:08.115104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:08.115148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:08.115164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:08.115172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.158 [2024-12-09 15:16:08.115181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.158 [2024-12-09 15:16:08.115188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 
15:16:08.115238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.159 [2024-12-09 15:16:08.115536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.159 [2024-12-09 15:16:08.115811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.159 [2024-12-09 15:16:08.115818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.160 [2024-12-09 15:16:08.115832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.160 [2024-12-09 15:16:08.115847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:13.160 [2024-12-09 15:16:08.115861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.160 [2024-12-09 15:16:08.115875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.160 [2024-12-09 15:16:08.115891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.115986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.115993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 
[2024-12-09 15:16:08.116008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.160 [2024-12-09 15:16:08.116343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.160 [2024-12-09 15:16:08.116350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116456] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:13.161 [2024-12-09 15:16:08.116914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.161 [2024-12-09 15:16:08.116944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.161 [2024-12-09 15:16:08.116950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.116959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.116965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.116972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.116979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.116987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.116994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.117008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.117022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.117036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.117051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 
15:16:08.117059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.162 [2024-12-09 15:16:08.117065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97890 is same with the state(6) to be set 00:23:13.162 [2024-12-09 15:16:08.117081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.162 [2024-12-09 15:16:08.117086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.162 [2024-12-09 15:16:08.117091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109976 len:8 PRP1 0x0 PRP2 0x0 00:23:13.162 [2024-12-09 15:16:08.117099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117142] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:13.162 [2024-12-09 15:16:08.117170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.162 [2024-12-09 15:16:08.117178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.162 [2024-12-09 15:16:08.117192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.162 [2024-12-09 15:16:08.117206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.162 [2024-12-09 15:16:08.117224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.162 [2024-12-09 15:16:08.117230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:13.162 [2024-12-09 15:16:08.120025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:13.162 [2024-12-09 15:16:08.120056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3c8d0 (9): Bad file descriptor 00:23:13.162 [2024-12-09 15:16:08.189097] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
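The block above is the expected signature of a forced failover: every command still queued on the old TCP qpair is completed by the host with ABORTED - SQ DELETION, bdev_nvme moves the trid from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller reset completes. As a hedged illustration only (not part of the test scripts), the same events could be tallied from a saved copy of this console output; the file name failover.log is hypothetical:

  grep -c 'ABORTED - SQ DELETION' failover.log            # in-flight commands completed as aborted
  grep -c 'Start failover from' failover.log              # failover attempts between listeners
  grep -c 'Resetting controller successful' failover.log  # controller resets that completed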
00:23:13.162 11148.90 IOPS, 43.55 MiB/s [2024-12-09T14:16:14.957Z] 11165.09 IOPS, 43.61 MiB/s [2024-12-09T14:16:14.957Z] 11188.17 IOPS, 43.70 MiB/s [2024-12-09T14:16:14.957Z] 11221.31 IOPS, 43.83 MiB/s [2024-12-09T14:16:14.957Z] 11240.21 IOPS, 43.91 MiB/s [2024-12-09T14:16:14.957Z] 11266.60 IOPS, 44.01 MiB/s 00:23:13.162 Latency(us) 00:23:13.162 [2024-12-09T14:16:14.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.162 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:13.162 Verification LBA range: start 0x0 length 0x4000 00:23:13.162 NVMe0n1 : 15.01 11266.34 44.01 833.29 0.00 10557.29 427.15 21346.01 00:23:13.162 [2024-12-09T14:16:14.957Z] =================================================================================================================== 00:23:13.162 [2024-12-09T14:16:14.957Z] Total : 11266.34 44.01 833.29 0.00 10557.29 427.15 21346.01 00:23:13.162 Received shutdown signal, test time was about 15.000000 seconds 00:23:13.162 00:23:13.162 Latency(us) 00:23:13.162 [2024-12-09T14:16:14.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.162 [2024-12-09T14:16:14.957Z] =================================================================================================================== 00:23:13.162 [2024-12-09T14:16:14.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1527375 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1527375 /var/tmp/bdevperf.sock 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1527375 ']' 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
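The xtrace above shows the pass criterion and the start of the second phase: host/failover.sh greps the captured output for 'Resetting controller successful' and requires exactly 3 matches, then launches a second bdevperf in RPC mode (-z -r /var/tmp/bdevperf.sock) so controllers can be attached and detached by hand. A minimal sketch of that check, assuming the output was captured to try.txt as the later cat and rm -f of test/nvmf/host/try.txt suggest:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi

Note that the MiB/s column in the per-second samples follows directly from the 4 KiB I/O size set with -o 4096: IOPS x 4096 / 2^20, e.g. 11148.90 x 4096 / 1048576 = 43.55 MiB/s.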
00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:13.162 [2024-12-09 15:16:14.617742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:13.162 [2024-12-09 15:16:14.814277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:13.162 15:16:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:13.428 NVMe0n1 00:23:13.685 15:16:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:13.685 00:23:13.942 15:16:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:14.198 00:23:14.198 15:16:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.198 15:16:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:14.198 15:16:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.455 15:16:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:17.729 15:16:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.729 15:16:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:17.729 15:16:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1528135 00:23:17.729 15:16:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.729 15:16:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1528135 00:23:19.096 { 00:23:19.096 "results": [ 00:23:19.096 { 00:23:19.096 "job": "NVMe0n1", 00:23:19.096 "core_mask": "0x1", 
00:23:19.096 "workload": "verify", 00:23:19.096 "status": "finished", 00:23:19.097 "verify_range": { 00:23:19.097 "start": 0, 00:23:19.097 "length": 16384 00:23:19.097 }, 00:23:19.097 "queue_depth": 128, 00:23:19.097 "io_size": 4096, 00:23:19.097 "runtime": 1.012541, 00:23:19.097 "iops": 11492.867943125266, 00:23:19.097 "mibps": 44.89401540283307, 00:23:19.097 "io_failed": 0, 00:23:19.097 "io_timeout": 0, 00:23:19.097 "avg_latency_us": 11093.573099268753, 00:23:19.097 "min_latency_us": 2371.7790476190476, 00:23:19.097 "max_latency_us": 9112.624761904763 00:23:19.097 } 00:23:19.097 ], 00:23:19.097 "core_count": 1 00:23:19.097 } 00:23:19.097 15:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.097 [2024-12-09 15:16:14.245545] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:23:19.097 [2024-12-09 15:16:14.245599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527375 ] 00:23:19.097 [2024-12-09 15:16:14.319313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.097 [2024-12-09 15:16:14.355551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.097 [2024-12-09 15:16:16.139415] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:19.097 [2024-12-09 15:16:16.139460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.097 [2024-12-09 15:16:16.139471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.097 [2024-12-09 15:16:16.139480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.097 [2024-12-09 15:16:16.139488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.097 [2024-12-09 15:16:16.139495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.097 [2024-12-09 15:16:16.139505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.097 [2024-12-09 15:16:16.139512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.097 [2024-12-09 15:16:16.139519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.097 [2024-12-09 15:16:16.139526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:19.097 [2024-12-09 15:16:16.139553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:19.097 [2024-12-09 15:16:16.139568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211b8d0 (9): Bad file descriptor 00:23:19.097 [2024-12-09 15:16:16.142243] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:19.097 Running I/O for 1 seconds... 00:23:19.097 11420.00 IOPS, 44.61 MiB/s 00:23:19.097 Latency(us) 00:23:19.097 [2024-12-09T14:16:20.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.097 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:19.097 Verification LBA range: start 0x0 length 0x4000 00:23:19.097 NVMe0n1 : 1.01 11492.87 44.89 0.00 0.00 11093.57 2371.78 9112.62 00:23:19.097 [2024-12-09T14:16:20.892Z] =================================================================================================================== 00:23:19.097 [2024-12-09T14:16:20.892Z] Total : 11492.87 44.89 0.00 0.00 11093.57 2371.78 9112.62 00:23:19.097 15:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.097 15:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:19.097 15:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.352 15:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.352 15:16:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:19.352 15:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.609 15:16:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1527375 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1527375 ']' 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1527375 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1527375 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1527375' 00:23:22.883 killing process with pid 1527375 00:23:22.883 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1527375 00:23:22.884 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1527375 00:23:23.140 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:23.141 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.398 15:16:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.398 rmmod nvme_tcp 00:23:23.398 rmmod nvme_fabrics 00:23:23.398 rmmod nvme_keyring 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1524409 ']' 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1524409 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1524409 ']' 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1524409 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1524409 00:23:23.398 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:23.399 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:23.399 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1524409' 00:23:23.399 killing process with pid 1524409 00:23:23.399 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1524409 00:23:23.399 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1524409 00:23:23.656 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:23.656 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:23.656 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:23.656 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:23.656 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.657 15:16:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.561 15:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.561 00:23:25.561 real 0m37.460s 00:23:25.561 user 1m58.790s 00:23:25.561 sys 0m7.785s 00:23:25.561 15:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.561 15:16:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 ************************************ 00:23:25.561 END TEST nvmf_failover 00:23:25.561 ************************************ 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.820 ************************************ 00:23:25.820 START TEST nvmf_host_discovery 00:23:25.820 ************************************ 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:25.820 * Looking for test storage... 
00:23:25.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.820 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.821 --rc genhtml_branch_coverage=1 00:23:25.821 --rc genhtml_function_coverage=1 00:23:25.821 --rc genhtml_legend=1 00:23:25.821 --rc geninfo_all_blocks=1 00:23:25.821 --rc geninfo_unexecuted_blocks=1 00:23:25.821 00:23:25.821 ' 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.821 --rc genhtml_branch_coverage=1 00:23:25.821 --rc genhtml_function_coverage=1 00:23:25.821 --rc genhtml_legend=1 00:23:25.821 --rc geninfo_all_blocks=1 00:23:25.821 --rc geninfo_unexecuted_blocks=1 00:23:25.821 00:23:25.821 ' 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.821 --rc genhtml_branch_coverage=1 00:23:25.821 --rc genhtml_function_coverage=1 00:23:25.821 --rc genhtml_legend=1 00:23:25.821 --rc geninfo_all_blocks=1 00:23:25.821 --rc geninfo_unexecuted_blocks=1 00:23:25.821 00:23:25.821 ' 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.821 --rc genhtml_branch_coverage=1 00:23:25.821 --rc genhtml_function_coverage=1 00:23:25.821 --rc genhtml_legend=1 00:23:25.821 --rc geninfo_all_blocks=1 00:23:25.821 --rc geninfo_unexecuted_blocks=1 00:23:25.821 00:23:25.821 ' 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:25.821 15:16:27 
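The cmp_versions walk above is just a component-wise numeric comparison of dot-separated version strings, used to pick lcov options that older lcov releases still accept. A simplified sketch of the idea, not the exact scripts/common.sh implementation:

  lt() {   # succeed if version $1 sorts before version $2
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "1.15 < 2"   # the comparison traced above takes this branch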
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.821 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.080 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.080 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.080 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.080 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.081 15:16:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.653 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:32.654 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:32.654 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.654 15:16:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:32.654 Found net devices under 0000:af:00.0: cvl_0_0 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:32.654 Found net devices under 0000:af:00.1: cvl_0_1 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.654 
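The device scan above resolves the two ports of an Intel E810 NIC (device ID 0x159b) and then maps each PCI function to its kernel net device through sysfs. Outside the test framework the same mapping can be reproduced roughly like this (lspci is used here only for illustration; the framework reads its own pci_bus_cache instead):

  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done
  # on this machine: 0000:af:00.0 -> cvl_0_0 and 0000:af:00.1 -> cvl_0_1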
15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:23:32.654 00:23:32.654 --- 10.0.0.2 ping statistics --- 00:23:32.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.654 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
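Stripped of the xtrace noise, the TCP test topology built above uses the two E810 ports directly: one port is moved into a private network namespace and acts as the target interface, the other stays in the root namespace as the initiator. A condensed sketch with the interface names found on this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                            # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1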
00:23:32.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:23:32.654 00:23:32.654 --- 10.0.0.1 ping statistics --- 00:23:32.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.654 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1532514 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1532514 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1532514 ']' 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.654 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.654 [2024-12-09 15:16:33.662208] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
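With networking in place, the target application is started inside the namespace so that it owns the 10.0.0.2 interface. Minus the waitforlisten scaffolding, the launch traced above is roughly (the polling loop is one way to wait, not the framework's exact code):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the default RPC socket until the app answers, which is what waitforlisten does in spirit
  until scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

The Unix-domain RPC socket lives on the shared filesystem, so rpc.py can be driven from the root namespace even though the process runs inside cvl_0_0_ns_spdk.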
00:23:32.654 [2024-12-09 15:16:33.662257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.654 [2024-12-09 15:16:33.735905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.654 [2024-12-09 15:16:33.774431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.655 [2024-12-09 15:16:33.774467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.655 [2024-12-09 15:16:33.774474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.655 [2024-12-09 15:16:33.774481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.655 [2024-12-09 15:16:33.774486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.655 [2024-12-09 15:16:33.775005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 [2024-12-09 15:16:33.909886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 [2024-12-09 15:16:33.922048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 null0 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
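The discovery test then configures the target over RPC: a TCP transport, a listener for the well-known discovery subsystem on port 8009, and null bdevs that will later be exported as namespaces. Condensed from the trace (null1 follows just below):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512   # null bdev, 512-byte blocks
  scripts/rpc.py bdev_null_create null1 1000 512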
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 null1 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1532711 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1532711 /tmp/host.sock 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1532711 ']' 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:32.655 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.655 15:16:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 [2024-12-09 15:16:33.997169] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
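The "host" in this test is a second SPDK application acting as the NVMe-oF initiator. It runs in the root namespace on its own core and its own RPC socket, so target and host can be driven independently; roughly:

  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!

Using core mask 0x1 for the host and 0x2 for the target keeps the two reactors off each other's core; every host-side RPC from here on carries -s /tmp/host.sock.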
00:23:32.655 [2024-12-09 15:16:33.997210] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532711 ] 00:23:32.655 [2024-12-09 15:16:34.069567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.655 [2024-12-09 15:16:34.109740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
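The host enables bdev_nvme debug logging and starts the discovery service against the target's discovery listener; the checks that follow poll two small helpers built from rpc.py and jq. Roughly, with the socket and NQNs of this run:

  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  get_subsystem_names() {   # controllers the host has attached (empty until add_host)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs created from attached namespaces
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }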
host/discovery.sh@55 -- # sort 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.655 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.914 [2024-12-09 15:16:34.523579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:32.914 15:16:34 
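On the target side the subsystem is now assembled piece by piece, with the host checked after each step and still expected to see nothing, since nqn.2021-12.io.spdk:test has not been allowed yet. The steps interleaved with the checks above are:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420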
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.914 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:32.915 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.173 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:33.173 15:16:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:33.741 [2024-12-09 15:16:35.268722] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:33.741 [2024-12-09 15:16:35.268741] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:33.741 
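The notification bookkeeping above is a running counter over the host's notify_get_notifications output, and nvmf_subsystem_add_host is the step that finally makes cnode0 visible to this host NQN. Roughly (notify_id starts at 0 and advances by however many notifications have already been consumed):

  get_notification_count() {
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
  }

  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # the discovery poller on the host now sees cnode0 in the discovery log page and attaches it
  # as nvme0, which is what the bdev_nvme INFO messages around this point report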
[2024-12-09 15:16:35.268754] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.741 [2024-12-09 15:16:35.355004] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:33.741 [2024-12-09 15:16:35.456662] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:33.741 [2024-12-09 15:16:35.457271] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2202260:1 started. 00:23:33.741 [2024-12-09 15:16:35.458617] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:33.741 [2024-12-09 15:16:35.458633] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:33.741 [2024-12-09 15:16:35.466531] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2202260 was disconnected and freed. delete nvme_qpair. 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:34.071 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.382 15:16:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.382 [2024-12-09 15:16:36.095802] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2202440:1 started. 00:23:34.382 [2024-12-09 15:16:36.098154] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2202440 was disconnected and freed. delete nvme_qpair. 
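The three rpc_cmd/jq pipelines traced above (host/discovery.sh@55, @59 and @63) are the test's read helpers; a minimal sketch reconstructed from this xtrace follows (the upstream script may differ in detail). /tmp/host.sock is the host application's RPC socket, and sort | xargs flattens the JSON names into one space-separated string for the [[ ... == ... ]] comparisons.

get_subsystem_names() { # host/discovery.sh@59 in the trace
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() { # host/discovery.sh@55
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() { # host/discovery.sh@63; $1 is a controller name such as nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}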
00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:34.382 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.640 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:34.640 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.640 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.640 [2024-12-09 15:16:36.180086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.640 [2024-12-09 15:16:36.181094] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:34.640 [2024-12-09 15:16:36.181114] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:34.640 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.640 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:34.640 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:34.641 [2024-12-09 15:16:36.268684] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:34.641 15:16:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:34.899 [2024-12-09 15:16:36.576059] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:34.899 [2024-12-09 15:16:36.576092] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:34.899 [2024-12-09 15:16:36.576100] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:34.899 [2024-12-09 15:16:36.576105] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.834 [2024-12-09 15:16:37.436304] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:35.834 [2024-12-09 15:16:37.436324] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:35.834 [2024-12-09 15:16:37.440147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.834 [2024-12-09 15:16:37.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.834 [2024-12-09 15:16:37.440173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.834 [2024-12-09 15:16:37.440180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.834 [2024-12-09 15:16:37.440188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.834 [2024-12-09 15:16:37.440194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.834 [2024-12-09 15:16:37.440206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.834 [2024-12-09 15:16:37.440212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.834 [2024-12-09 15:16:37.440222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:35.834 [2024-12-09 15:16:37.450162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.834 [2024-12-09 15:16:37.460198] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.834 [2024-12-09 15:16:37.460209] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.834 [2024-12-09 15:16:37.460215] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.834 [2024-12-09 15:16:37.460224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.834 [2024-12-09 15:16:37.460241] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:35.834 [2024-12-09 15:16:37.460414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.834 [2024-12-09 15:16:37.460430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.834 [2024-12-09 15:16:37.460439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.834 [2024-12-09 15:16:37.460452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.834 [2024-12-09 15:16:37.460476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.834 [2024-12-09 15:16:37.460484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.834 [2024-12-09 15:16:37.460492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.834 [2024-12-09 15:16:37.460499] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.834 [2024-12-09 15:16:37.460503] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.834 [2024-12-09 15:16:37.460511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
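The common/autotest_common.sh@918-@924 lines that recur throughout this trace are a polling helper; a rough reconstruction from the xtrace, assuming only what the trace shows (a retry cap of 10 and a one-second back-off; the real helper may handle the timeout case differently):

waitforcondition() { # common/autotest_common.sh@918-@924
    local cond=$1 # condition string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10  # give up after ~10 attempts
    while (( max-- )); do
        eval "$cond" && return 0 # condition met
        sleep 1                  # back off, then re-evaluate
    done
    return 1 # condition never became true
}

The failing-then-passing path is visible above: the first eval of get_subsystem_paths returned only 4420, the script slept for a second, and the retry saw "4420 4421" once the second path had attached.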
00:23:35.834 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.834 [2024-12-09 15:16:37.470273] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.834 [2024-12-09 15:16:37.470283] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.834 [2024-12-09 15:16:37.470288] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.470292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.835 [2024-12-09 15:16:37.470306] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.470476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.835 [2024-12-09 15:16:37.470488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.835 [2024-12-09 15:16:37.470496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.835 [2024-12-09 15:16:37.470507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.835 [2024-12-09 15:16:37.470523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.835 [2024-12-09 15:16:37.470530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.835 [2024-12-09 15:16:37.470537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.835 [2024-12-09 15:16:37.470543] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.835 [2024-12-09 15:16:37.470548] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.835 [2024-12-09 15:16:37.470551] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.835 [2024-12-09 15:16:37.480337] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.835 [2024-12-09 15:16:37.480349] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.835 [2024-12-09 15:16:37.480353] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.480357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.835 [2024-12-09 15:16:37.480373] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:35.835 [2024-12-09 15:16:37.480520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.835 [2024-12-09 15:16:37.480532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.835 [2024-12-09 15:16:37.480539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.835 [2024-12-09 15:16:37.480550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.835 [2024-12-09 15:16:37.480596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.835 [2024-12-09 15:16:37.480606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.835 [2024-12-09 15:16:37.480613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.835 [2024-12-09 15:16:37.480619] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.835 [2024-12-09 15:16:37.480628] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.835 [2024-12-09 15:16:37.480632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:35.835 [2024-12-09 15:16:37.490403] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.835 [2024-12-09 15:16:37.490416] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.835 [2024-12-09 15:16:37.490420] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.490424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.835 [2024-12-09 15:16:37.490439] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
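The notify_get_notifications checks traced at host/discovery.sh@74-@80 keep a running notify_id so each assertion only counts events raised since the previous check; they reuse the waitforcondition helper sketched above. A sketch inferred from the xtrace (the exact variable handling is an assumption; the real helpers may differ):

get_notification_count() { # host/discovery.sh@74-@75
    # assumes notify_id was initialised to 0 earlier in the script, as the first check above suggests
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count)) # advance past the events just counted
}

is_notification_count_eq() { # host/discovery.sh@79-@80
    expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}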
00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:35.835 [2024-12-09 15:16:37.490713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.835 [2024-12-09 15:16:37.490728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.835 [2024-12-09 15:16:37.490736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.835 [2024-12-09 15:16:37.490748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.835 [2024-12-09 15:16:37.490764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.835 [2024-12-09 15:16:37.490771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.835 [2024-12-09 15:16:37.490778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.835 [2024-12-09 15:16:37.490784] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.835 [2024-12-09 15:16:37.490788] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.835 [2024-12-09 15:16:37.490792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.835 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:35.835 [2024-12-09 15:16:37.500470] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.835 [2024-12-09 15:16:37.500488] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.835 [2024-12-09 15:16:37.500493] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.500497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.835 [2024-12-09 15:16:37.500514] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:35.835 [2024-12-09 15:16:37.500611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.835 [2024-12-09 15:16:37.500626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.835 [2024-12-09 15:16:37.500634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.835 [2024-12-09 15:16:37.500645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.835 [2024-12-09 15:16:37.500673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.835 [2024-12-09 15:16:37.500681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.835 [2024-12-09 15:16:37.500689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.835 [2024-12-09 15:16:37.500696] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.835 [2024-12-09 15:16:37.500701] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.835 [2024-12-09 15:16:37.500705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.835 [2024-12-09 15:16:37.510544] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.835 [2024-12-09 15:16:37.510554] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.835 [2024-12-09 15:16:37.510558] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.510562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.835 [2024-12-09 15:16:37.510577] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.510727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.835 [2024-12-09 15:16:37.510740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.835 [2024-12-09 15:16:37.510748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.835 [2024-12-09 15:16:37.510759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.835 [2024-12-09 15:16:37.510769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.835 [2024-12-09 15:16:37.510776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.835 [2024-12-09 15:16:37.510783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.835 [2024-12-09 15:16:37.510789] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:35.835 [2024-12-09 15:16:37.510794] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.835 [2024-12-09 15:16:37.510797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.835 [2024-12-09 15:16:37.520607] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:35.835 [2024-12-09 15:16:37.520616] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:35.835 [2024-12-09 15:16:37.520620] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.520623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.835 [2024-12-09 15:16:37.520637] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:35.835 [2024-12-09 15:16:37.520749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.835 [2024-12-09 15:16:37.520760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d2710 with addr=10.0.0.2, port=4420 00:23:35.835 [2024-12-09 15:16:37.520767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2710 is same with the state(6) to be set 00:23:35.836 [2024-12-09 15:16:37.520777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d2710 (9): Bad file descriptor 00:23:35.836 [2024-12-09 15:16:37.520791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.836 [2024-12-09 15:16:37.520798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.836 [2024-12-09 15:16:37.520804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.836 [2024-12-09 15:16:37.520810] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.836 [2024-12-09 15:16:37.520814] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.836 [2024-12-09 15:16:37.520817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
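errno 111 in the connect() failures above is ECONNREFUSED: the 4420 listener was removed a few steps earlier (nvmf_subsystem_remove_listener at host/discovery.sh@127), so every bdev_nvme reconnect attempt to 10.0.0.2:4420 is refused until the next discovery log page drops the stale path. Condensed from the trace, the sequence that produces this burst and the check that it eventually settles on the remaining port is roughly:

rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# once discovery re-reads the log page, only the second listener should remain:
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
# expected to settle on: 4421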
00:23:35.836 [2024-12-09 15:16:37.523319] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:35.836 [2024-12-09 15:16:37.523332] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.836 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.094 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.095 15:16:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.468 [2024-12-09 15:16:38.837379] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:37.468 [2024-12-09 15:16:38.837395] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:37.468 [2024-12-09 15:16:38.837406] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:37.468 [2024-12-09 15:16:38.924658] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:37.468 [2024-12-09 15:16:39.032391] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:37.468 [2024-12-09 15:16:39.032942] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2339940:1 started. 00:23:37.468 [2024-12-09 15:16:39.034467] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:37.468 [2024-12-09 15:16:39.034492] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.468 [2024-12-09 15:16:39.035718] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2339940 was disconnected and freed. delete nvme_qpair. 
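The final phase stops discovery, waits for the controller and bdev lists to drain, then restarts it with -w (wait_for_attach) and verifies that a duplicate start is rejected. Condensed from the rpc_cmd calls in this trace (socket path and flags copied verbatim; the intermediate notification check is omitted):

rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme  # host/discovery.sh@134
waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'     # @136
waitforcondition '[[ "$(get_bdev_list)" == "" ]]'           # @137
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
    -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w          # @141
# Re-issuing a start that reuses an attached bdev-name prefix fails with JSON-RPC
# error -17 "File exists", which the test asserts via the NOT wrapper below.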
00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.468 request: 00:23:37.468 { 00:23:37.468 "name": "nvme", 00:23:37.468 "trtype": "tcp", 00:23:37.468 "traddr": "10.0.0.2", 00:23:37.468 "adrfam": "ipv4", 00:23:37.468 "trsvcid": "8009", 00:23:37.468 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:37.468 "wait_for_attach": true, 00:23:37.468 "method": "bdev_nvme_start_discovery", 00:23:37.468 "req_id": 1 00:23:37.468 } 00:23:37.468 Got JSON-RPC error response 00:23:37.468 response: 00:23:37.468 { 00:23:37.468 "code": -17, 00:23:37.468 "message": "File exists" 00:23:37.468 } 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.468 request: 00:23:37.468 { 00:23:37.468 "name": "nvme_second", 00:23:37.468 "trtype": "tcp", 00:23:37.468 "traddr": "10.0.0.2", 00:23:37.468 "adrfam": "ipv4", 00:23:37.468 "trsvcid": "8009", 00:23:37.468 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:37.468 "wait_for_attach": true, 00:23:37.468 "method": "bdev_nvme_start_discovery", 00:23:37.468 "req_id": 1 00:23:37.468 } 00:23:37.468 Got JSON-RPC error response 00:23:37.468 response: 00:23:37.468 { 00:23:37.468 "code": -17, 00:23:37.468 "message": "File exists" 00:23:37.468 } 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.468 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.469 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:37.726 15:16:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.658 [2024-12-09 15:16:40.277912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:38.658 [2024-12-09 15:16:40.277939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2338010 with addr=10.0.0.2, port=8010 00:23:38.658 [2024-12-09 15:16:40.277954] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:38.658 [2024-12-09 15:16:40.277961] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:38.658 [2024-12-09 15:16:40.277967] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:39.591 [2024-12-09 15:16:41.280290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.591 [2024-12-09 15:16:41.280315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2338010 with addr=10.0.0.2, port=8010 00:23:39.591 [2024-12-09 15:16:41.280327] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:39.591 [2024-12-09 15:16:41.280334] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:39.591 [2024-12-09 15:16:41.280340] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:40.523 [2024-12-09 15:16:42.282525] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:40.523 request: 00:23:40.523 { 00:23:40.523 "name": "nvme_second", 00:23:40.523 "trtype": "tcp", 00:23:40.523 "traddr": "10.0.0.2", 00:23:40.523 "adrfam": "ipv4", 00:23:40.523 "trsvcid": "8010", 00:23:40.523 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:40.523 "wait_for_attach": false, 00:23:40.523 "attach_timeout_ms": 3000, 00:23:40.523 "method": "bdev_nvme_start_discovery", 00:23:40.523 "req_id": 1 00:23:40.523 } 00:23:40.523 Got JSON-RPC error response 00:23:40.523 response: 00:23:40.523 { 00:23:40.523 "code": -110, 00:23:40.523 "message": "Connection timed out" 00:23:40.523 } 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.523 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:40.523 15:16:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1532711 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.781 rmmod nvme_tcp 00:23:40.781 rmmod nvme_fabrics 00:23:40.781 rmmod nvme_keyring 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1532514 ']' 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1532514 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1532514 ']' 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1532514 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532514 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532514' 00:23:40.781 killing process with pid 1532514 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1532514 00:23:40.781 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1532514 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:41.040 15:16:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.040 15:16:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.942 00:23:42.942 real 0m17.266s 00:23:42.942 user 0m20.539s 00:23:42.942 sys 0m5.848s 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.942 ************************************ 00:23:42.942 END TEST nvmf_host_discovery 00:23:42.942 ************************************ 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.942 15:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.202 ************************************ 00:23:43.202 START TEST nvmf_host_multipath_status 00:23:43.202 ************************************ 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:43.202 * Looking for test storage... 
00:23:43.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.202 --rc genhtml_branch_coverage=1 00:23:43.202 --rc genhtml_function_coverage=1 00:23:43.202 --rc genhtml_legend=1 00:23:43.202 --rc geninfo_all_blocks=1 00:23:43.202 --rc geninfo_unexecuted_blocks=1 00:23:43.202 00:23:43.202 ' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.202 --rc genhtml_branch_coverage=1 00:23:43.202 --rc genhtml_function_coverage=1 00:23:43.202 --rc genhtml_legend=1 00:23:43.202 --rc geninfo_all_blocks=1 00:23:43.202 --rc geninfo_unexecuted_blocks=1 00:23:43.202 00:23:43.202 ' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.202 --rc genhtml_branch_coverage=1 00:23:43.202 --rc genhtml_function_coverage=1 00:23:43.202 --rc genhtml_legend=1 00:23:43.202 --rc geninfo_all_blocks=1 00:23:43.202 --rc geninfo_unexecuted_blocks=1 00:23:43.202 00:23:43.202 ' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.202 --rc genhtml_branch_coverage=1 00:23:43.202 --rc genhtml_function_coverage=1 00:23:43.202 --rc genhtml_legend=1 00:23:43.202 --rc geninfo_all_blocks=1 00:23:43.202 --rc geninfo_unexecuted_blocks=1 00:23:43.202 00:23:43.202 ' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.202 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.203 15:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.768 15:16:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.768 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:49.769 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:49.769 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:49.769 Found net devices under 0000:af:00.0: cvl_0_0 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:23:49.769 Found net devices under 0000:af:00.1: cvl_0_1 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.769 15:16:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:49.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:23:49.769 00:23:49.769 --- 10.0.0.2 ping statistics --- 00:23:49.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.769 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:23:49.769 00:23:49.769 --- 10.0.0.1 ping statistics --- 00:23:49.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.769 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1537675 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1537675 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1537675 ']' 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.769 15:16:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.769 15:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.769 [2024-12-09 15:16:50.977608] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:23:49.769 [2024-12-09 15:16:50.977661] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.769 [2024-12-09 15:16:51.058377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:49.769 [2024-12-09 15:16:51.096777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.769 [2024-12-09 15:16:51.096816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.769 [2024-12-09 15:16:51.096823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.770 [2024-12-09 15:16:51.096830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.770 [2024-12-09 15:16:51.096835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.770 [2024-12-09 15:16:51.097955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.770 [2024-12-09 15:16:51.097955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1537675 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:49.770 [2024-12-09 15:16:51.406534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.770 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:50.027 Malloc0 00:23:50.027 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:50.284 15:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.284 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.542 [2024-12-09 15:16:52.222065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.542 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:50.799 [2024-12-09 15:16:52.430568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1537995 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1537995 /var/tmp/bdevperf.sock 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1537995 ']' 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
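At this point the multipath test has built its target: a TCP transport, a 64 MiB/512 B Malloc bdev, a subsystem with ANA reporting, and two listeners (4420 and 4421) as alternate paths, plus a bdevperf instance held idle on /var/tmp/bdevperf.sock. A condensed sketch of that bring-up, using the same RPC calls and addresses seen in the trace (the $RPC/$NQN shorthand is illustrative):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as used in this run
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512 B blocks
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
    $RPC nvmf_subsystem_add_ns $NQN Malloc0
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # first path
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # second path

    # bdevperf stays idle (-z) until perform_tests is sent via bdevperf.py.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

The host side, shown in the next trace records, then attaches the same subsystem over both listeners with bdev_nvme_attach_controller -x multipath so that Nvme0n1 has two I/O paths.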
00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.799 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:51.057 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.057 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:51.057 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:51.315 15:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:51.572 Nvme0n1 00:23:51.572 15:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:52.139 Nvme0n1 00:23:52.139 15:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:52.139 15:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:54.037 15:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:54.037 15:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:54.296 15:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.555 15:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:55.492 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:55.492 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.492 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.492 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.751 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.751 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.751 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.751 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.009 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.010 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.010 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.010 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.268 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.268 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.268 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.268 15:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.268 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.268 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.268 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.268 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.527 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.527 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.527 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.527 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.785 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.785 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:56.785 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
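The check_status/port_status loop above is driven entirely by two RPCs: nvmf_subsystem_listener_set_ana_state on the target to flip a listener's ANA state, and bdev_nvme_get_io_paths on the bdevperf socket, filtered through jq, to confirm which path the host now reports as current/connected/accessible. A minimal sketch of one such round trip, reusing the socket path and jq filter from the trace (the explicit pipe is illustrative; the harness wires this through its port_status helper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Demote the 4420 listener, then ask the host which path is "current" for 4420.
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized

    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

With 4420 non_optimized and 4421 optimized, the expected answers flip to false for 4420 and true for 4421, which is what the subsequent trace records assert.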
00:23:57.044 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.302 15:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:58.235 15:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:58.235 15:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:58.235 15:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.235 15:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.494 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.753 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.753 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:58.753 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.753 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.011 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.011 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.011 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:23:59.011 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.269 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.269 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.269 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.269 15:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.528 15:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.528 15:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:59.528 15:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:59.785 15:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:59.785 15:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.158 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.415 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.415 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.415 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.415 15:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.415 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.415 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.415 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.415 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.673 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.673 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.673 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.673 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.931 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.931 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.931 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.931 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.190 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.190 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:02.190 15:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.449 15:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.708 15:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:03.643 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:03.643 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.643 15:17:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.643 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.902 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.161 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.161 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.161 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.161 15:17:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.420 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.420 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.420 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.420 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.678 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.678 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.678 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.678 15:17:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.938 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.938 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:04.938 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:05.198 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:05.198 15:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:06.574 15:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:06.574 15:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:06.574 15:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.574 15:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.574 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.834 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.834 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.834 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.834 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.092 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.092 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:07.092 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.092 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.351 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:07.351 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:07.351 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.351 15:17:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.351 15:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:07.351 15:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:07.351 15:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:07.609 15:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.867 15:17:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:08.802 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:08.802 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:08.802 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.802 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.060 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.060 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:09.060 15:17:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.060 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.318 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.318 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.318 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.318 15:17:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.318 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.318 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.318 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.318 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.577 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.577 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:09.577 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.577 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.836 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.836 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.836 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.836 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.095 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.095 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:10.353 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:10.353 15:17:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:10.353 15:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:10.611 15:17:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:11.581 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:11.581 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.581 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.581 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.839 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.839 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:11.839 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.839 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.097 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.097 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.097 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.097 15:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.355 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.355 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.355 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.355 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.613 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.613 15:17:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:12.613 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.613 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:12.871 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:13.129 15:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:13.388 15:17:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:14.321 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:14.321 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:14.321 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.321 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.579 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.579 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:14.579 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.579 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.837 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.837 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.837 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.837 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.095 15:17:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:15.353 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.353 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:15.353 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.353 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.611 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.611 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:15.612 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:15.870 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:16.127 15:17:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
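Before the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at sh@116, every check expects current==true on at most one port at a time; after the switch, the sh@121 and sh@131 checks expect current==true on both 4420 and 4421 whenever the two listeners share the same usable ANA state. A convenient way to watch this from the same bdevperf RPC socket (a one-liner reusing the variables from the sketch above, not part of the test script):

    # Count the I/O paths that are simultaneously marked current.
    $rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'

In this run the count would be 1 during the active_passive checks and 2 once active_active is in effect, matching the per-port current flags asserted above and below.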
00:24:17.061 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:17.061 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:17.061 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.061 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.319 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.319 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:17.319 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.319 15:17:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.577 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.925 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.925 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.925 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.925 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:18.210 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.210 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:18.210 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.210 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.211 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.211 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:18.211 15:17:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.468 15:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:18.727 15:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:19.663 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:19.663 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.663 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.663 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.922 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.922 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:19.922 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.922 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.181 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.181 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.181 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.181 15:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.440 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:20.440 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.440 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.440 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.699 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.958 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.958 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1537995 00:24:20.958 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1537995 ']' 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1537995 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537995 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537995' 00:24:20.959 killing process with pid 1537995 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1537995 00:24:20.959 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1537995 00:24:20.959 { 00:24:20.959 "results": [ 00:24:20.959 { 00:24:20.959 "job": "Nvme0n1", 
00:24:20.959 "core_mask": "0x4", 00:24:20.959 "workload": "verify", 00:24:20.959 "status": "terminated", 00:24:20.959 "verify_range": { 00:24:20.959 "start": 0, 00:24:20.959 "length": 16384 00:24:20.959 }, 00:24:20.959 "queue_depth": 128, 00:24:20.959 "io_size": 4096, 00:24:20.959 "runtime": 28.814064, 00:24:20.959 "iops": 10751.069338917274, 00:24:20.959 "mibps": 41.9963646051456, 00:24:20.959 "io_failed": 0, 00:24:20.959 "io_timeout": 0, 00:24:20.959 "avg_latency_us": 11885.780535239683, 00:24:20.959 "min_latency_us": 249.66095238095238, 00:24:20.959 "max_latency_us": 3019898.88 00:24:20.959 } 00:24:20.959 ], 00:24:20.959 "core_count": 1 00:24:20.959 } 00:24:21.220 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1537995 00:24:21.220 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:21.220 [2024-12-09 15:16:52.505840] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:24:21.220 [2024-12-09 15:16:52.505891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537995 ] 00:24:21.220 [2024-12-09 15:16:52.576768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.220 [2024-12-09 15:16:52.616186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.220 Running I/O for 90 seconds... 00:24:21.220 11482.00 IOPS, 44.85 MiB/s [2024-12-09T14:17:23.015Z] 11528.50 IOPS, 45.03 MiB/s [2024-12-09T14:17:23.015Z] 11583.33 IOPS, 45.25 MiB/s [2024-12-09T14:17:23.015Z] 11559.25 IOPS, 45.15 MiB/s [2024-12-09T14:17:23.015Z] 11573.60 IOPS, 45.21 MiB/s [2024-12-09T14:17:23.015Z] 11579.67 IOPS, 45.23 MiB/s [2024-12-09T14:17:23.015Z] 11562.57 IOPS, 45.17 MiB/s [2024-12-09T14:17:23.015Z] 11581.12 IOPS, 45.24 MiB/s [2024-12-09T14:17:23.015Z] 11596.22 IOPS, 45.30 MiB/s [2024-12-09T14:17:23.015Z] 11595.50 IOPS, 45.29 MiB/s [2024-12-09T14:17:23.015Z] 11599.91 IOPS, 45.31 MiB/s [2024-12-09T14:17:23.015Z] 11609.25 IOPS, 45.35 MiB/s [2024-12-09T14:17:23.015Z] [2024-12-09 15:17:06.719119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.220 [2024-12-09 15:17:06.719158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:21.220 [2024-12-09 15:17:06.719195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.220 [2024-12-09 15:17:06.719204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:21.220 [2024-12-09 15:17:06.719221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.220 [2024-12-09 15:17:06.719229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:21.220 [2024-12-09 15:17:06.719242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
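The JSON that bdevperf prints when the job is terminated summarizes the whole run: 10751.07 IOPS of 4096-byte verify I/O over 28.81 s at queue depth 128, with an average latency of about 11.9 ms. The reported mibps is simply iops * io_size / 2^20, and the average latency is consistent with queue_depth / iops; a quick recomputation with jq (illustrative only, the numbers are taken from the results block above):

    # mibps = iops * io_size / 2^20
    jq -n '10751.069338917274 * 4096 / 1048576'    # ~41.9964, reported mibps: 41.9963646051456
    # avg latency ~= queue_depth / iops (Little's law)
    jq -n '128 / 10751.069338917274 * 1e6'         # ~11906 us, reported avg_latency_us: 11885.78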
00:24:21.221 [2024-12-09 15:17:06.719250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.719492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.719505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.221 [2024-12-09 15:17:06.719513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:24:21.221 [2024-12-09 15:17:06.720951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.720983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.720997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.221 [2024-12-09 15:17:06.721680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:21.221 [2024-12-09 15:17:06.721695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.721989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.721996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.222 [2024-12-09 15:17:06.722656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:21.222 [2024-12-09 15:17:06.722672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.722978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.722985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:24:21.223 [2024-12-09 15:17:06.723141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:06.723211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:06.723221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:21.223 11482.23 IOPS, 44.85 MiB/s [2024-12-09T14:17:23.018Z] 10662.07 IOPS, 41.65 MiB/s [2024-12-09T14:17:23.018Z] 9951.27 IOPS, 38.87 MiB/s [2024-12-09T14:17:23.018Z] 9437.12 IOPS, 36.86 MiB/s [2024-12-09T14:17:23.018Z] 9557.94 IOPS, 37.34 MiB/s [2024-12-09T14:17:23.018Z] 9672.50 IOPS, 37.78 MiB/s [2024-12-09T14:17:23.018Z] 9854.79 IOPS, 38.50 MiB/s [2024-12-09T14:17:23.018Z] 10053.75 IOPS, 39.27 MiB/s [2024-12-09T14:17:23.018Z] 10215.67 IOPS, 39.90 MiB/s [2024-12-09T14:17:23.018Z] 10271.50 IOPS, 40.12 MiB/s [2024-12-09T14:17:23.018Z] 10320.00 IOPS, 40.31 MiB/s [2024-12-09T14:17:23.018Z] 10383.42 IOPS, 40.56 MiB/s [2024-12-09T14:17:23.018Z] 10508.64 IOPS, 41.05 MiB/s [2024-12-09T14:17:23.018Z] 10620.62 IOPS, 41.49 MiB/s [2024-12-09T14:17:23.018Z] [2024-12-09 15:17:20.389596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.389784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.389791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.391241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.391264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.391280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.391289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.391301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.223 [2024-12-09 15:17:20.391308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.391320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.223 [2024-12-09 15:17:20.391327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:21.223 [2024-12-09 15:17:20.391339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.223 [2024-12-09 15:17:20.391355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.391723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.391852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.391859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.392339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.224 [2024-12-09 15:17:20.392355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.392370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.392377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.392390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.392398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.392410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.392417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:21.224 [2024-12-09 15:17:20.392430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.224 [2024-12-09 15:17:20.392437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:21.224 10697.96 IOPS, 41.79 MiB/s [2024-12-09T14:17:23.019Z] 10728.21 IOPS, 41.91 MiB/s [2024-12-09T14:17:23.019Z] Received shutdown signal, test time was about 28.814697 seconds 00:24:21.224 00:24:21.224 Latency(us) 00:24:21.224 [2024-12-09T14:17:23.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.224 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:21.224 Verification LBA range: start 0x0 length 0x4000 00:24:21.224 Nvme0n1 : 28.81 10751.07 42.00 0.00 0.00 11885.78 249.66 3019898.88 00:24:21.224 [2024-12-09T14:17:23.019Z] =================================================================================================================== 00:24:21.224 [2024-12-09T14:17:23.019Z] Total : 10751.07 42.00 0.00 0.00 11885.78 249.66 3019898.88 00:24:21.224 15:17:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.484 rmmod nvme_tcp 00:24:21.484 rmmod nvme_fabrics 00:24:21.484 rmmod nvme_keyring 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1537675 ']' 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1537675 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1537675 ']' 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1537675 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537675 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537675' 00:24:21.484 killing process with pid 1537675 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1537675 00:24:21.484 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1537675 00:24:21.743 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.743 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.743 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.743 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:21.743 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:21.743 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.744 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.744 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.744 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.744 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.744 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.744 15:17:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.650 15:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.650 00:24:23.650 real 0m40.680s 00:24:23.650 user 1m50.284s 00:24:23.650 sys 0m11.538s 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:23.909 ************************************ 00:24:23.909 END TEST nvmf_host_multipath_status 00:24:23.909 ************************************ 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.909 ************************************ 00:24:23.909 START TEST nvmf_discovery_remove_ifc 00:24:23.909 ************************************ 
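Note on the teardown traced above, just before the END TEST banner: after the latency summary, nvmftestfini deletes the cnode1 subsystem over the RPC socket, unloads the host-side nvme-tcp/nvme-fabrics modules, kills and waits on the nvmf target process (pid 1537675 in this run), strips the SPDK_NVMF iptables rules, and flushes the test interface address. The following is a condensed sketch of that flow for illustration only, not the exact harness code (the real logic lives in nvmf/common.sh and autotest_common.sh); the pid, interface name, and subsystem NQN are simply the ones visible in this run's trace.

#!/usr/bin/env bash
# Hypothetical standalone rendering of the nvmftestfini teardown traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=1537675       # nvmf target PID recorded earlier in this run (assumption: started by this shell)
iface=cvl_0_1     # test interface flushed at the end of the trace

# 1. Remove the subsystem the multipath test created.
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# 2. Unload the kernel initiator modules; failures are tolerated, hence the set +e
#    window seen in the trace.
set +e
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
set -e

# 3. Stop the target application; wait only works because the harness launched the
#    target from the same shell.
kill "$pid"
wait "$pid"

# 4. Drop the SPDK_NVMF iptables rules (presumably piped as below in iptr) and flush
#    the test interface address, matching the final ip command in the trace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush "$iface"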
00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:23.909 * Looking for test storage... 00:24:23.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.909 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:23.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:23.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.910 --rc genhtml_branch_coverage=1 00:24:23.910 --rc genhtml_function_coverage=1 00:24:23.910 --rc genhtml_legend=1 00:24:23.910 --rc geninfo_all_blocks=1 00:24:23.910 --rc geninfo_unexecuted_blocks=1 00:24:23.910 00:24:23.910 ' 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.910 
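The block above is scripts/common.sh deciding which lcov flags discovery_remove_ifc.sh may use: lt 1.15 2 calls cmp_versions, which splits both version strings on dots and dashes, walks the components numerically, and lets the first differing component decide; here the check returns true, after which lcov_rc_opt is set with the branch/function coverage flags. Below is a condensed re-implementation for illustration only, not the exact cmp_versions code; the helper name version_lt is made up for this sketch.

#!/usr/bin/env bash
# Simplified sketch of the version comparison traced above (lt 1.15 2 -> cmp_versions 1.15 '<' 2).
version_lt() {
    local IFS=.-                      # split on dots and dashes, as the trace does
    local -a a=($1) b=($2)            # e.g. "1.15" -> (1 15), "2" -> (2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        # (the real cmp_versions also normalizes components via decimal(); omitted here)
        (( x > y )) && return 1           # first differing component decides
        (( x < y )) && return 0
    done
    return 1                              # equal versions are not "less than"
}

# Usage mirroring the trace: is 1.15 < 2? If so, the coverage options get enabled.
version_lt 1.15 2 && echo "enable --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"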
15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.910 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.169 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.170 15:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:30.739 15:17:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.739 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:30.740 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.740 15:17:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:30.740 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:30.740 Found net devices under 0000:af:00.0: cvl_0_0 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:30.740 Found net devices under 0000:af:00.1: cvl_0_1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:30.740 
15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:30.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:24:30.740 00:24:30.740 --- 10.0.0.2 ping statistics --- 00:24:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.740 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:30.740 00:24:30.740 --- 10.0.0.1 ping statistics --- 00:24:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.740 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1546448 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1546448 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1546448 ']' 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:30.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.740 [2024-12-09 15:17:31.700555] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:24:30.740 [2024-12-09 15:17:31.700597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.740 [2024-12-09 15:17:31.776667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.740 [2024-12-09 15:17:31.815344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.740 [2024-12-09 15:17:31.815379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.740 [2024-12-09 15:17:31.815386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.740 [2024-12-09 15:17:31.815392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.740 [2024-12-09 15:17:31.815397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.740 [2024-12-09 15:17:31.815928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.740 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.741 15:17:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.741 [2024-12-09 15:17:31.959546] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.741 [2024-12-09 15:17:31.967693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:30.741 null0 00:24:30.741 [2024-12-09 15:17:31.999695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1546552 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1546552 /tmp/host.sock 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1546552 ']' 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:30.741 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.741 [2024-12-09 15:17:32.068643] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:24:30.741 [2024-12-09 15:17:32.068684] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546552 ] 00:24:30.741 [2024-12-09 15:17:32.141299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.741 [2024-12-09 15:17:32.182272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.741 15:17:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.677 [2024-12-09 15:17:33.367371] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:31.677 [2024-12-09 15:17:33.367391] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:31.677 [2024-12-09 15:17:33.367404] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:31.677 [2024-12-09 15:17:33.453666] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:31.935 [2024-12-09 15:17:33.596459] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:31.935 [2024-12-09 15:17:33.597137] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x150a210:1 started. 00:24:31.935 [2024-12-09 15:17:33.598438] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:31.935 [2024-12-09 15:17:33.598478] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:31.935 [2024-12-09 15:17:33.598498] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:31.935 [2024-12-09 15:17:33.598510] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:31.935 [2024-12-09 15:17:33.598528] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.935 [2024-12-09 15:17:33.604953] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x150a210 was disconnected and freed. delete nvme_qpair. 
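
The second nvmf_tgt launched above acts as the "host": it runs discovery against the target's discovery service on 10.0.0.2:8009 and exposes the discovered namespace as bdev nvme0n1. Below is a hedged sketch of that sequence as plain scripts/rpc.py calls, assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK test helpers and that the working directory is the SPDK repo root; the socket path, NQN and timeout flags are copied from the trace.

  # Host-side app, held at --wait-for-rpc so it can be configured over
  # /tmp/host.sock before the framework starts (-L bdev_nvme adds debug logs).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # (The real test waits for the RPC socket with waitforlisten before continuing.)

  rpc=./scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
  $rpc -s /tmp/host.sock framework_start_init
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

  # Once discovery attaches, the namespace shows up in the host's bdev list.
  $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect: nvme0n1
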
00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:31.935 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:32.203 15:17:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:33.138 15:17:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.072 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.331 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.331 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:34.331 15:17:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.266 15:17:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.202 15:17:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:36.460 15:17:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.460 15:17:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.395 [2024-12-09 15:17:39.040112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:37.395 [2024-12-09 15:17:39.040145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.395 [2024-12-09 15:17:39.040155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.395 [2024-12-09 15:17:39.040163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.395 [2024-12-09 15:17:39.040169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.395 [2024-12-09 15:17:39.040180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.395 [2024-12-09 15:17:39.040187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.395 [2024-12-09 15:17:39.040195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.395 [2024-12-09 15:17:39.040201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.395 [2024-12-09 15:17:39.040209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.395 [2024-12-09 15:17:39.040216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.395 [2024-12-09 15:17:39.040227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e6a10 is same with the state(6) to be set 00:24:37.395 [2024-12-09 15:17:39.050135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e6a10 (9): Bad file descriptor 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.395 15:17:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.395 [2024-12-09 15:17:39.060169] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:37.395 [2024-12-09 15:17:39.060183] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:37.395 [2024-12-09 15:17:39.060189] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:37.395 [2024-12-09 15:17:39.060193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:37.395 [2024-12-09 15:17:39.060213] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.330 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.330 [2024-12-09 15:17:40.122277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:38.330 [2024-12-09 15:17:40.122361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e6a10 with addr=10.0.0.2, port=4420 00:24:38.330 [2024-12-09 15:17:40.122395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e6a10 is same with the state(6) to be set 00:24:38.330 [2024-12-09 15:17:40.122452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e6a10 (9): Bad file descriptor 00:24:38.330 [2024-12-09 15:17:40.123409] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:38.330 [2024-12-09 15:17:40.123473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.330 [2024-12-09 15:17:40.123495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.330 [2024-12-09 15:17:40.123518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.330 [2024-12-09 15:17:40.123548] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:38.330 [2024-12-09 15:17:40.123564] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.330 [2024-12-09 15:17:40.123578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:38.330 [2024-12-09 15:17:40.123601] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.330 [2024-12-09 15:17:40.123616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.588 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.588 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.588 15:17:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.523 [2024-12-09 15:17:41.126133] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:39.523 [2024-12-09 15:17:41.126154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:39.523 [2024-12-09 15:17:41.126165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:39.523 [2024-12-09 15:17:41.126172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:39.523 [2024-12-09 15:17:41.126179] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:39.523 [2024-12-09 15:17:41.126185] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:39.523 [2024-12-09 15:17:41.126190] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:39.523 [2024-12-09 15:17:41.126194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:39.523 [2024-12-09 15:17:41.126213] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:39.523 [2024-12-09 15:17:41.126237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.523 [2024-12-09 15:17:41.126246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.523 [2024-12-09 15:17:41.126255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.523 [2024-12-09 15:17:41.126262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.523 [2024-12-09 15:17:41.126269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.523 [2024-12-09 15:17:41.126276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.523 [2024-12-09 15:17:41.126282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.523 [2024-12-09 15:17:41.126289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.523 [2024-12-09 15:17:41.126296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.523 [2024-12-09 15:17:41.126303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.523 [2024-12-09 15:17:41.126311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
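
The timeout and reconnect errors in this stretch of the log are the expected outcome of the earlier step where the test deletes the target's address and downs cvl_0_0 inside the namespace, then polls until the bdev list drains. A hedged sketch of that remove-and-poll step, with the same rpc.py and jq assumptions as the sketch above:

  # Yank the target's data path out from under the established connection.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

  # Poll until nvme0n1 disappears: reconnect attempts fail (errno 110 above)
  # and the controller is dropped once --ctrlr-loss-timeout-sec (2s) expires.
  while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs)" ]; do
    sleep 1
  done
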
00:24:39.523 [2024-12-09 15:17:41.126685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d5d20 (9): Bad file descriptor 00:24:39.523 [2024-12-09 15:17:41.127693] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:39.523 [2024-12-09 15:17:41.127704] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.523 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.782 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:39.782 15:17:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.717 15:17:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:40.717 15:17:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.653 [2024-12-09 15:17:43.140748] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:41.653 [2024-12-09 15:17:43.140768] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:41.653 [2024-12-09 15:17:43.140779] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:41.653 [2024-12-09 15:17:43.267144] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:41.653 [2024-12-09 15:17:43.321640] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:41.653 [2024-12-09 15:17:43.322177] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1513980:1 started. 00:24:41.653 [2024-12-09 15:17:43.323203] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:41.653 [2024-12-09 15:17:43.323247] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:41.653 [2024-12-09 15:17:43.323266] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:41.653 [2024-12-09 15:17:43.323278] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:41.653 [2024-12-09 15:17:43.323284] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:41.653 [2024-12-09 15:17:43.329935] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1513980 was disconnected and freed. delete nvme_qpair. 
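
With the address and link restored, the still-running discovery service reconnects without any new RPC and the namespace comes back under a fresh controller, which is why the messages just above report nvme1 and a new qpair. A hedged sketch of that restore-and-wait step, reusing the interface names and rpc.py assumption from the sketches above:

  # Give the target its address back and bring the port up again.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

  # Discovery re-attaches on its own; wait for the namespace to reappear under
  # the next controller name, nvme1n1.
  until ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx 'nvme1n1'; do
    sleep 1
  done
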
00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1546552 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1546552 ']' 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1546552 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.653 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546552 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546552' 00:24:41.912 killing process with pid 1546552 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1546552 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1546552 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.912 rmmod nvme_tcp 00:24:41.912 rmmod nvme_fabrics 00:24:41.912 rmmod nvme_keyring 00:24:41.912 15:17:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1546448 ']' 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1546448 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1546448 ']' 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1546448 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.912 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546448 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546448' 00:24:42.172 killing process with pid 1546448 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1546448 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1546448 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.172 15:17:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.737 15:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.737 00:24:44.737 real 0m20.467s 00:24:44.737 user 0m24.705s 00:24:44.737 sys 0m5.793s 00:24:44.737 15:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.737 15:17:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.737 ************************************ 00:24:44.737 END TEST nvmf_discovery_remove_ifc 00:24:44.737 ************************************ 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.737 ************************************ 00:24:44.737 START TEST nvmf_identify_kernel_target 00:24:44.737 ************************************ 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:44.737 * Looking for test storage... 00:24:44.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:44.737 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:44.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.738 --rc genhtml_branch_coverage=1 00:24:44.738 --rc genhtml_function_coverage=1 00:24:44.738 --rc genhtml_legend=1 00:24:44.738 --rc geninfo_all_blocks=1 00:24:44.738 --rc geninfo_unexecuted_blocks=1 00:24:44.738 00:24:44.738 ' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:44.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.738 --rc genhtml_branch_coverage=1 00:24:44.738 --rc genhtml_function_coverage=1 00:24:44.738 --rc genhtml_legend=1 00:24:44.738 --rc geninfo_all_blocks=1 00:24:44.738 --rc geninfo_unexecuted_blocks=1 00:24:44.738 00:24:44.738 ' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:44.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.738 --rc genhtml_branch_coverage=1 00:24:44.738 --rc genhtml_function_coverage=1 00:24:44.738 --rc genhtml_legend=1 00:24:44.738 --rc geninfo_all_blocks=1 00:24:44.738 --rc geninfo_unexecuted_blocks=1 00:24:44.738 00:24:44.738 ' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:44.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.738 --rc genhtml_branch_coverage=1 00:24:44.738 --rc genhtml_function_coverage=1 00:24:44.738 --rc genhtml_legend=1 00:24:44.738 --rc geninfo_all_blocks=1 00:24:44.738 --rc geninfo_unexecuted_blocks=1 00:24:44.738 00:24:44.738 ' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:44.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.738 15:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.309 15:17:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.309 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:51.310 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:51.310 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:51.310 Found net devices under 0000:af:00.0: cvl_0_0 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:51.310 Found net devices under 0000:af:00.1: cvl_0_1 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.310 15:17:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:24:51.310 00:24:51.310 --- 10.0.0.2 ping statistics --- 00:24:51.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.310 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:24:51.310 00:24:51.310 --- 10.0.0.1 ping statistics --- 00:24:51.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.310 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.310 15:17:52 
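Condensed for readability, the nvmf_tcp_init sequence traced above builds the two-interface TCP test topology roughly as follows (interface names, addresses and the iptables comment are taken from the trace; this is a sketch of the logged commands, not the literal nvmf/common.sh code):

    # Give the target-side port its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP/4420 on the initiator-side interface, tagged so the teardown can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify reachability in both directions before the test continues.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1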
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:51.310 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:51.311 15:17:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:53.215 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:24:53.478 Waiting for block devices as requested 00:24:53.478 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:53.736 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:53.736 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:53.736 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:53.994 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:53.994 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:53.994 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:54.253 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:54.253 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:54.253 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:54.253 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:54.513 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:54.513 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:54.513 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:54.772 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:54.772 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:54.772 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:55.031 15:17:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:55.031 No valid GPT data, bailing 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:24:55.031 No valid GPT data, bailing 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:24:55.031 15:17:56 
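The configure_kernel_target trace that follows exports /dev/nvme1n1 through the kernel nvmet configfs tree and then runs discovery against it. Condensed sketch below; the NQN, device path, address and port values come from the trace, while the redirection targets are not printed by xtrace, so the attribute file names are assumed from the standard nvmet configfs layout:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    # Redirection targets below are the usual nvmet configfs attributes (assumed; xtrace hides them).
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    # Discovery from the initiator side then reports two records, the discovery
    # subsystem and nqn.2016-06.io.spdk:testnqn, as shown in the log below.
    nvme discover -a 10.0.0.1 -t tcp -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562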
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:55.031 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:55.291 00:24:55.291 Discovery Log Number of Records 2, Generation counter 2 00:24:55.291 =====Discovery Log Entry 0====== 00:24:55.291 trtype: tcp 00:24:55.291 adrfam: ipv4 00:24:55.291 subtype: current discovery subsystem 00:24:55.291 treq: not specified, sq flow control disable supported 00:24:55.291 portid: 1 00:24:55.291 trsvcid: 4420 00:24:55.291 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:55.291 traddr: 10.0.0.1 00:24:55.291 eflags: none 00:24:55.291 sectype: none 00:24:55.291 =====Discovery Log Entry 1====== 00:24:55.291 trtype: tcp 00:24:55.291 adrfam: ipv4 00:24:55.291 subtype: nvme subsystem 00:24:55.291 treq: not specified, sq flow control disable supported 00:24:55.292 portid: 1 00:24:55.292 trsvcid: 4420 00:24:55.292 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:55.292 traddr: 10.0.0.1 00:24:55.292 eflags: none 00:24:55.292 sectype: none 00:24:55.292 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:55.292 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:55.292 ===================================================== 00:24:55.292 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:55.292 ===================================================== 00:24:55.292 Controller Capabilities/Features 00:24:55.292 ================================ 00:24:55.292 Vendor 
ID: 0000 00:24:55.292 Subsystem Vendor ID: 0000 00:24:55.292 Serial Number: 49e76a290bc7af82986d 00:24:55.292 Model Number: Linux 00:24:55.292 Firmware Version: 6.8.9-20 00:24:55.292 Recommended Arb Burst: 0 00:24:55.292 IEEE OUI Identifier: 00 00 00 00:24:55.292 Multi-path I/O 00:24:55.292 May have multiple subsystem ports: No 00:24:55.292 May have multiple controllers: No 00:24:55.292 Associated with SR-IOV VF: No 00:24:55.292 Max Data Transfer Size: Unlimited 00:24:55.292 Max Number of Namespaces: 0 00:24:55.292 Max Number of I/O Queues: 1024 00:24:55.292 NVMe Specification Version (VS): 1.3 00:24:55.292 NVMe Specification Version (Identify): 1.3 00:24:55.292 Maximum Queue Entries: 1024 00:24:55.292 Contiguous Queues Required: No 00:24:55.292 Arbitration Mechanisms Supported 00:24:55.292 Weighted Round Robin: Not Supported 00:24:55.292 Vendor Specific: Not Supported 00:24:55.292 Reset Timeout: 7500 ms 00:24:55.292 Doorbell Stride: 4 bytes 00:24:55.292 NVM Subsystem Reset: Not Supported 00:24:55.292 Command Sets Supported 00:24:55.292 NVM Command Set: Supported 00:24:55.292 Boot Partition: Not Supported 00:24:55.292 Memory Page Size Minimum: 4096 bytes 00:24:55.292 Memory Page Size Maximum: 4096 bytes 00:24:55.292 Persistent Memory Region: Not Supported 00:24:55.292 Optional Asynchronous Events Supported 00:24:55.292 Namespace Attribute Notices: Not Supported 00:24:55.292 Firmware Activation Notices: Not Supported 00:24:55.292 ANA Change Notices: Not Supported 00:24:55.292 PLE Aggregate Log Change Notices: Not Supported 00:24:55.292 LBA Status Info Alert Notices: Not Supported 00:24:55.292 EGE Aggregate Log Change Notices: Not Supported 00:24:55.292 Normal NVM Subsystem Shutdown event: Not Supported 00:24:55.292 Zone Descriptor Change Notices: Not Supported 00:24:55.292 Discovery Log Change Notices: Supported 00:24:55.292 Controller Attributes 00:24:55.292 128-bit Host Identifier: Not Supported 00:24:55.292 Non-Operational Permissive Mode: Not Supported 00:24:55.292 NVM Sets: Not Supported 00:24:55.292 Read Recovery Levels: Not Supported 00:24:55.292 Endurance Groups: Not Supported 00:24:55.292 Predictable Latency Mode: Not Supported 00:24:55.292 Traffic Based Keep ALive: Not Supported 00:24:55.292 Namespace Granularity: Not Supported 00:24:55.292 SQ Associations: Not Supported 00:24:55.292 UUID List: Not Supported 00:24:55.292 Multi-Domain Subsystem: Not Supported 00:24:55.292 Fixed Capacity Management: Not Supported 00:24:55.292 Variable Capacity Management: Not Supported 00:24:55.292 Delete Endurance Group: Not Supported 00:24:55.292 Delete NVM Set: Not Supported 00:24:55.292 Extended LBA Formats Supported: Not Supported 00:24:55.292 Flexible Data Placement Supported: Not Supported 00:24:55.292 00:24:55.292 Controller Memory Buffer Support 00:24:55.292 ================================ 00:24:55.292 Supported: No 00:24:55.292 00:24:55.292 Persistent Memory Region Support 00:24:55.292 ================================ 00:24:55.292 Supported: No 00:24:55.292 00:24:55.292 Admin Command Set Attributes 00:24:55.292 ============================ 00:24:55.292 Security Send/Receive: Not Supported 00:24:55.292 Format NVM: Not Supported 00:24:55.292 Firmware Activate/Download: Not Supported 00:24:55.292 Namespace Management: Not Supported 00:24:55.292 Device Self-Test: Not Supported 00:24:55.292 Directives: Not Supported 00:24:55.292 NVMe-MI: Not Supported 00:24:55.292 Virtualization Management: Not Supported 00:24:55.292 Doorbell Buffer Config: Not Supported 00:24:55.292 Get LBA Status Capability: 
Not Supported 00:24:55.292 Command & Feature Lockdown Capability: Not Supported 00:24:55.292 Abort Command Limit: 1 00:24:55.292 Async Event Request Limit: 1 00:24:55.292 Number of Firmware Slots: N/A 00:24:55.292 Firmware Slot 1 Read-Only: N/A 00:24:55.292 Firmware Activation Without Reset: N/A 00:24:55.292 Multiple Update Detection Support: N/A 00:24:55.292 Firmware Update Granularity: No Information Provided 00:24:55.292 Per-Namespace SMART Log: No 00:24:55.292 Asymmetric Namespace Access Log Page: Not Supported 00:24:55.292 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:55.292 Command Effects Log Page: Not Supported 00:24:55.292 Get Log Page Extended Data: Supported 00:24:55.292 Telemetry Log Pages: Not Supported 00:24:55.292 Persistent Event Log Pages: Not Supported 00:24:55.292 Supported Log Pages Log Page: May Support 00:24:55.292 Commands Supported & Effects Log Page: Not Supported 00:24:55.292 Feature Identifiers & Effects Log Page:May Support 00:24:55.292 NVMe-MI Commands & Effects Log Page: May Support 00:24:55.292 Data Area 4 for Telemetry Log: Not Supported 00:24:55.292 Error Log Page Entries Supported: 1 00:24:55.292 Keep Alive: Not Supported 00:24:55.292 00:24:55.292 NVM Command Set Attributes 00:24:55.292 ========================== 00:24:55.292 Submission Queue Entry Size 00:24:55.292 Max: 1 00:24:55.292 Min: 1 00:24:55.292 Completion Queue Entry Size 00:24:55.292 Max: 1 00:24:55.292 Min: 1 00:24:55.292 Number of Namespaces: 0 00:24:55.292 Compare Command: Not Supported 00:24:55.292 Write Uncorrectable Command: Not Supported 00:24:55.292 Dataset Management Command: Not Supported 00:24:55.292 Write Zeroes Command: Not Supported 00:24:55.292 Set Features Save Field: Not Supported 00:24:55.292 Reservations: Not Supported 00:24:55.292 Timestamp: Not Supported 00:24:55.292 Copy: Not Supported 00:24:55.292 Volatile Write Cache: Not Present 00:24:55.292 Atomic Write Unit (Normal): 1 00:24:55.292 Atomic Write Unit (PFail): 1 00:24:55.292 Atomic Compare & Write Unit: 1 00:24:55.292 Fused Compare & Write: Not Supported 00:24:55.292 Scatter-Gather List 00:24:55.292 SGL Command Set: Supported 00:24:55.292 SGL Keyed: Not Supported 00:24:55.292 SGL Bit Bucket Descriptor: Not Supported 00:24:55.292 SGL Metadata Pointer: Not Supported 00:24:55.292 Oversized SGL: Not Supported 00:24:55.292 SGL Metadata Address: Not Supported 00:24:55.292 SGL Offset: Supported 00:24:55.292 Transport SGL Data Block: Not Supported 00:24:55.292 Replay Protected Memory Block: Not Supported 00:24:55.292 00:24:55.292 Firmware Slot Information 00:24:55.292 ========================= 00:24:55.292 Active slot: 0 00:24:55.292 00:24:55.292 00:24:55.292 Error Log 00:24:55.292 ========= 00:24:55.292 00:24:55.292 Active Namespaces 00:24:55.292 ================= 00:24:55.292 Discovery Log Page 00:24:55.292 ================== 00:24:55.292 Generation Counter: 2 00:24:55.292 Number of Records: 2 00:24:55.292 Record Format: 0 00:24:55.292 00:24:55.292 Discovery Log Entry 0 00:24:55.292 ---------------------- 00:24:55.292 Transport Type: 3 (TCP) 00:24:55.292 Address Family: 1 (IPv4) 00:24:55.292 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:55.292 Entry Flags: 00:24:55.292 Duplicate Returned Information: 0 00:24:55.292 Explicit Persistent Connection Support for Discovery: 0 00:24:55.292 Transport Requirements: 00:24:55.292 Secure Channel: Not Specified 00:24:55.292 Port ID: 1 (0x0001) 00:24:55.292 Controller ID: 65535 (0xffff) 00:24:55.292 Admin Max SQ Size: 32 00:24:55.292 Transport Service Identifier: 4420 
00:24:55.292 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:55.292 Transport Address: 10.0.0.1 00:24:55.292 Discovery Log Entry 1 00:24:55.292 ---------------------- 00:24:55.292 Transport Type: 3 (TCP) 00:24:55.292 Address Family: 1 (IPv4) 00:24:55.292 Subsystem Type: 2 (NVM Subsystem) 00:24:55.292 Entry Flags: 00:24:55.292 Duplicate Returned Information: 0 00:24:55.292 Explicit Persistent Connection Support for Discovery: 0 00:24:55.292 Transport Requirements: 00:24:55.292 Secure Channel: Not Specified 00:24:55.292 Port ID: 1 (0x0001) 00:24:55.292 Controller ID: 65535 (0xffff) 00:24:55.292 Admin Max SQ Size: 32 00:24:55.292 Transport Service Identifier: 4420 00:24:55.292 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:55.292 Transport Address: 10.0.0.1 00:24:55.292 15:17:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:55.292 get_feature(0x01) failed 00:24:55.293 get_feature(0x02) failed 00:24:55.293 get_feature(0x04) failed 00:24:55.293 ===================================================== 00:24:55.293 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:55.293 ===================================================== 00:24:55.293 Controller Capabilities/Features 00:24:55.293 ================================ 00:24:55.293 Vendor ID: 0000 00:24:55.293 Subsystem Vendor ID: 0000 00:24:55.293 Serial Number: 1527abe5feaa9c0ad780 00:24:55.293 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:55.293 Firmware Version: 6.8.9-20 00:24:55.293 Recommended Arb Burst: 6 00:24:55.293 IEEE OUI Identifier: 00 00 00 00:24:55.293 Multi-path I/O 00:24:55.293 May have multiple subsystem ports: Yes 00:24:55.293 May have multiple controllers: Yes 00:24:55.293 Associated with SR-IOV VF: No 00:24:55.293 Max Data Transfer Size: Unlimited 00:24:55.293 Max Number of Namespaces: 1024 00:24:55.293 Max Number of I/O Queues: 128 00:24:55.293 NVMe Specification Version (VS): 1.3 00:24:55.293 NVMe Specification Version (Identify): 1.3 00:24:55.293 Maximum Queue Entries: 1024 00:24:55.293 Contiguous Queues Required: No 00:24:55.293 Arbitration Mechanisms Supported 00:24:55.293 Weighted Round Robin: Not Supported 00:24:55.293 Vendor Specific: Not Supported 00:24:55.293 Reset Timeout: 7500 ms 00:24:55.293 Doorbell Stride: 4 bytes 00:24:55.293 NVM Subsystem Reset: Not Supported 00:24:55.293 Command Sets Supported 00:24:55.293 NVM Command Set: Supported 00:24:55.293 Boot Partition: Not Supported 00:24:55.293 Memory Page Size Minimum: 4096 bytes 00:24:55.293 Memory Page Size Maximum: 4096 bytes 00:24:55.293 Persistent Memory Region: Not Supported 00:24:55.293 Optional Asynchronous Events Supported 00:24:55.293 Namespace Attribute Notices: Supported 00:24:55.293 Firmware Activation Notices: Not Supported 00:24:55.293 ANA Change Notices: Supported 00:24:55.293 PLE Aggregate Log Change Notices: Not Supported 00:24:55.293 LBA Status Info Alert Notices: Not Supported 00:24:55.293 EGE Aggregate Log Change Notices: Not Supported 00:24:55.293 Normal NVM Subsystem Shutdown event: Not Supported 00:24:55.293 Zone Descriptor Change Notices: Not Supported 00:24:55.293 Discovery Log Change Notices: Not Supported 00:24:55.293 Controller Attributes 00:24:55.293 128-bit Host Identifier: Supported 00:24:55.293 Non-Operational Permissive Mode: Not Supported 
00:24:55.293 NVM Sets: Not Supported 00:24:55.293 Read Recovery Levels: Not Supported 00:24:55.293 Endurance Groups: Not Supported 00:24:55.293 Predictable Latency Mode: Not Supported 00:24:55.293 Traffic Based Keep ALive: Supported 00:24:55.293 Namespace Granularity: Not Supported 00:24:55.293 SQ Associations: Not Supported 00:24:55.293 UUID List: Not Supported 00:24:55.293 Multi-Domain Subsystem: Not Supported 00:24:55.293 Fixed Capacity Management: Not Supported 00:24:55.293 Variable Capacity Management: Not Supported 00:24:55.293 Delete Endurance Group: Not Supported 00:24:55.293 Delete NVM Set: Not Supported 00:24:55.293 Extended LBA Formats Supported: Not Supported 00:24:55.293 Flexible Data Placement Supported: Not Supported 00:24:55.293 00:24:55.293 Controller Memory Buffer Support 00:24:55.293 ================================ 00:24:55.293 Supported: No 00:24:55.293 00:24:55.293 Persistent Memory Region Support 00:24:55.293 ================================ 00:24:55.293 Supported: No 00:24:55.293 00:24:55.293 Admin Command Set Attributes 00:24:55.293 ============================ 00:24:55.293 Security Send/Receive: Not Supported 00:24:55.293 Format NVM: Not Supported 00:24:55.293 Firmware Activate/Download: Not Supported 00:24:55.293 Namespace Management: Not Supported 00:24:55.293 Device Self-Test: Not Supported 00:24:55.293 Directives: Not Supported 00:24:55.293 NVMe-MI: Not Supported 00:24:55.293 Virtualization Management: Not Supported 00:24:55.293 Doorbell Buffer Config: Not Supported 00:24:55.293 Get LBA Status Capability: Not Supported 00:24:55.293 Command & Feature Lockdown Capability: Not Supported 00:24:55.293 Abort Command Limit: 4 00:24:55.293 Async Event Request Limit: 4 00:24:55.293 Number of Firmware Slots: N/A 00:24:55.293 Firmware Slot 1 Read-Only: N/A 00:24:55.293 Firmware Activation Without Reset: N/A 00:24:55.293 Multiple Update Detection Support: N/A 00:24:55.293 Firmware Update Granularity: No Information Provided 00:24:55.293 Per-Namespace SMART Log: Yes 00:24:55.293 Asymmetric Namespace Access Log Page: Supported 00:24:55.293 ANA Transition Time : 10 sec 00:24:55.293 00:24:55.293 Asymmetric Namespace Access Capabilities 00:24:55.293 ANA Optimized State : Supported 00:24:55.293 ANA Non-Optimized State : Supported 00:24:55.293 ANA Inaccessible State : Supported 00:24:55.293 ANA Persistent Loss State : Supported 00:24:55.293 ANA Change State : Supported 00:24:55.293 ANAGRPID is not changed : No 00:24:55.293 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:55.293 00:24:55.293 ANA Group Identifier Maximum : 128 00:24:55.293 Number of ANA Group Identifiers : 128 00:24:55.293 Max Number of Allowed Namespaces : 1024 00:24:55.293 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:55.293 Command Effects Log Page: Supported 00:24:55.293 Get Log Page Extended Data: Supported 00:24:55.293 Telemetry Log Pages: Not Supported 00:24:55.293 Persistent Event Log Pages: Not Supported 00:24:55.293 Supported Log Pages Log Page: May Support 00:24:55.293 Commands Supported & Effects Log Page: Not Supported 00:24:55.293 Feature Identifiers & Effects Log Page:May Support 00:24:55.293 NVMe-MI Commands & Effects Log Page: May Support 00:24:55.293 Data Area 4 for Telemetry Log: Not Supported 00:24:55.293 Error Log Page Entries Supported: 128 00:24:55.293 Keep Alive: Supported 00:24:55.293 Keep Alive Granularity: 1000 ms 00:24:55.293 00:24:55.293 NVM Command Set Attributes 00:24:55.293 ========================== 00:24:55.293 Submission Queue Entry Size 00:24:55.293 Max: 64 
00:24:55.293 Min: 64 00:24:55.293 Completion Queue Entry Size 00:24:55.293 Max: 16 00:24:55.293 Min: 16 00:24:55.293 Number of Namespaces: 1024 00:24:55.293 Compare Command: Not Supported 00:24:55.293 Write Uncorrectable Command: Not Supported 00:24:55.293 Dataset Management Command: Supported 00:24:55.293 Write Zeroes Command: Supported 00:24:55.293 Set Features Save Field: Not Supported 00:24:55.293 Reservations: Not Supported 00:24:55.293 Timestamp: Not Supported 00:24:55.293 Copy: Not Supported 00:24:55.293 Volatile Write Cache: Present 00:24:55.293 Atomic Write Unit (Normal): 1 00:24:55.293 Atomic Write Unit (PFail): 1 00:24:55.293 Atomic Compare & Write Unit: 1 00:24:55.293 Fused Compare & Write: Not Supported 00:24:55.293 Scatter-Gather List 00:24:55.293 SGL Command Set: Supported 00:24:55.293 SGL Keyed: Not Supported 00:24:55.293 SGL Bit Bucket Descriptor: Not Supported 00:24:55.293 SGL Metadata Pointer: Not Supported 00:24:55.293 Oversized SGL: Not Supported 00:24:55.293 SGL Metadata Address: Not Supported 00:24:55.293 SGL Offset: Supported 00:24:55.293 Transport SGL Data Block: Not Supported 00:24:55.293 Replay Protected Memory Block: Not Supported 00:24:55.293 00:24:55.293 Firmware Slot Information 00:24:55.293 ========================= 00:24:55.293 Active slot: 0 00:24:55.293 00:24:55.293 Asymmetric Namespace Access 00:24:55.293 =========================== 00:24:55.293 Change Count : 0 00:24:55.293 Number of ANA Group Descriptors : 1 00:24:55.293 ANA Group Descriptor : 0 00:24:55.293 ANA Group ID : 1 00:24:55.293 Number of NSID Values : 1 00:24:55.293 Change Count : 0 00:24:55.293 ANA State : 1 00:24:55.293 Namespace Identifier : 1 00:24:55.293 00:24:55.293 Commands Supported and Effects 00:24:55.293 ============================== 00:24:55.293 Admin Commands 00:24:55.293 -------------- 00:24:55.293 Get Log Page (02h): Supported 00:24:55.293 Identify (06h): Supported 00:24:55.293 Abort (08h): Supported 00:24:55.293 Set Features (09h): Supported 00:24:55.293 Get Features (0Ah): Supported 00:24:55.293 Asynchronous Event Request (0Ch): Supported 00:24:55.293 Keep Alive (18h): Supported 00:24:55.293 I/O Commands 00:24:55.293 ------------ 00:24:55.293 Flush (00h): Supported 00:24:55.293 Write (01h): Supported LBA-Change 00:24:55.293 Read (02h): Supported 00:24:55.293 Write Zeroes (08h): Supported LBA-Change 00:24:55.293 Dataset Management (09h): Supported 00:24:55.293 00:24:55.293 Error Log 00:24:55.293 ========= 00:24:55.293 Entry: 0 00:24:55.293 Error Count: 0x3 00:24:55.293 Submission Queue Id: 0x0 00:24:55.293 Command Id: 0x5 00:24:55.293 Phase Bit: 0 00:24:55.293 Status Code: 0x2 00:24:55.293 Status Code Type: 0x0 00:24:55.293 Do Not Retry: 1 00:24:55.293 Error Location: 0x28 00:24:55.293 LBA: 0x0 00:24:55.293 Namespace: 0x0 00:24:55.293 Vendor Log Page: 0x0 00:24:55.293 ----------- 00:24:55.293 Entry: 1 00:24:55.293 Error Count: 0x2 00:24:55.293 Submission Queue Id: 0x0 00:24:55.293 Command Id: 0x5 00:24:55.293 Phase Bit: 0 00:24:55.294 Status Code: 0x2 00:24:55.294 Status Code Type: 0x0 00:24:55.294 Do Not Retry: 1 00:24:55.294 Error Location: 0x28 00:24:55.294 LBA: 0x0 00:24:55.294 Namespace: 0x0 00:24:55.294 Vendor Log Page: 0x0 00:24:55.294 ----------- 00:24:55.294 Entry: 2 00:24:55.294 Error Count: 0x1 00:24:55.294 Submission Queue Id: 0x0 00:24:55.294 Command Id: 0x4 00:24:55.294 Phase Bit: 0 00:24:55.294 Status Code: 0x2 00:24:55.294 Status Code Type: 0x0 00:24:55.294 Do Not Retry: 1 00:24:55.294 Error Location: 0x28 00:24:55.294 LBA: 0x0 00:24:55.294 Namespace: 0x0 
00:24:55.294 Vendor Log Page: 0x0 00:24:55.294 00:24:55.294 Number of Queues 00:24:55.294 ================ 00:24:55.294 Number of I/O Submission Queues: 128 00:24:55.294 Number of I/O Completion Queues: 128 00:24:55.294 00:24:55.294 ZNS Specific Controller Data 00:24:55.294 ============================ 00:24:55.294 Zone Append Size Limit: 0 00:24:55.294 00:24:55.294 00:24:55.294 Active Namespaces 00:24:55.294 ================= 00:24:55.294 get_feature(0x05) failed 00:24:55.294 Namespace ID:1 00:24:55.294 Command Set Identifier: NVM (00h) 00:24:55.294 Deallocate: Supported 00:24:55.294 Deallocated/Unwritten Error: Not Supported 00:24:55.294 Deallocated Read Value: Unknown 00:24:55.294 Deallocate in Write Zeroes: Not Supported 00:24:55.294 Deallocated Guard Field: 0xFFFF 00:24:55.294 Flush: Supported 00:24:55.294 Reservation: Not Supported 00:24:55.294 Namespace Sharing Capabilities: Multiple Controllers 00:24:55.294 Size (in LBAs): 4194304 (2GiB) 00:24:55.294 Capacity (in LBAs): 4194304 (2GiB) 00:24:55.294 Utilization (in LBAs): 4194304 (2GiB) 00:24:55.294 UUID: e286c72c-760c-41df-97dc-9c101fb68d07 00:24:55.294 Thin Provisioning: Not Supported 00:24:55.294 Per-NS Atomic Units: Yes 00:24:55.294 Atomic Boundary Size (Normal): 0 00:24:55.294 Atomic Boundary Size (PFail): 0 00:24:55.294 Atomic Boundary Offset: 0 00:24:55.294 NGUID/EUI64 Never Reused: No 00:24:55.294 ANA group ID: 1 00:24:55.294 Namespace Write Protected: No 00:24:55.294 Number of LBA Formats: 1 00:24:55.294 Current LBA Format: LBA Format #00 00:24:55.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:55.294 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.294 rmmod nvme_tcp 00:24:55.294 rmmod nvme_fabrics 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:24:55.294 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.553 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.553 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.553 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.553 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.553 15:17:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:57.457 15:17:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:59.991 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:00.559 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:00.559 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:01.503 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:01.503 00:25:01.503 real 0m17.173s 00:25:01.503 user 0m4.569s 00:25:01.503 sys 0m8.881s 
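The clean_kernel_target sequence traced above tears down the in-kernel nvmet target over configfs before setup.sh rebinds the devices. A condensed sketch of that teardown, using only the subsystem NQN, port, and namespace paths visible in the trace; the attribute receiving the bare `echo 0` is assumed to be the namespace enable flag, which the trace does not show explicitly:

# Sketch of the traced kernel nvmet teardown (nqn.2016-06.io.spdk:testnqn, port 1, namespace 1).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"                          # assumed target of the traced `echo 0`
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the port
rmdir "$subsys/namespaces/1"                                    # remove the namespace
rmdir "$nvmet/ports/1"                                          # remove the TCP port
rmdir "$subsys"                                                 # remove the subsystem itself
modprobe -r nvmet_tcp nvmet                                     # unload the kernel target modules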
00:25:01.503 15:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.503 15:18:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.503 ************************************ 00:25:01.503 END TEST nvmf_identify_kernel_target 00:25:01.503 ************************************ 00:25:01.503 15:18:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:01.503 15:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.503 15:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.503 15:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.774 ************************************ 00:25:01.774 START TEST nvmf_auth_host 00:25:01.774 ************************************ 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:01.774 * Looking for test storage... 00:25:01.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:01.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.774 --rc genhtml_branch_coverage=1 00:25:01.774 --rc genhtml_function_coverage=1 00:25:01.774 --rc genhtml_legend=1 00:25:01.774 --rc geninfo_all_blocks=1 00:25:01.774 --rc geninfo_unexecuted_blocks=1 00:25:01.774 00:25:01.774 ' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:01.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.774 --rc genhtml_branch_coverage=1 00:25:01.774 --rc genhtml_function_coverage=1 00:25:01.774 --rc genhtml_legend=1 00:25:01.774 --rc geninfo_all_blocks=1 00:25:01.774 --rc geninfo_unexecuted_blocks=1 00:25:01.774 00:25:01.774 ' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:01.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.774 --rc genhtml_branch_coverage=1 00:25:01.774 --rc genhtml_function_coverage=1 00:25:01.774 --rc genhtml_legend=1 00:25:01.774 --rc geninfo_all_blocks=1 00:25:01.774 --rc geninfo_unexecuted_blocks=1 00:25:01.774 00:25:01.774 ' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:01.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.774 --rc genhtml_branch_coverage=1 00:25:01.774 --rc genhtml_function_coverage=1 00:25:01.774 --rc genhtml_legend=1 00:25:01.774 --rc geninfo_all_blocks=1 00:25:01.774 --rc geninfo_unexecuted_blocks=1 00:25:01.774 00:25:01.774 ' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.774 15:18:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.774 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.775 15:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.344 15:18:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.344 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.345 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.345 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.345 
15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.345 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.345 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.345 15:18:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:25:08.345 00:25:08.345 --- 10.0.0.2 ping statistics --- 00:25:08.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.345 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:08.345 00:25:08.345 --- 10.0.0.1 ping statistics --- 00:25:08.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.345 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1559042 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1559042 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1559042 ']' 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
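The nvmf_tcp_init/nvmfappstart trace above builds the two-port test fixture for the auth tests: the target-side E810 port (cvl_0_0) is moved into a private network namespace, the initiator port (cvl_0_1) stays in the default namespace, 10.0.0.2/10.0.0.1 sit on either end, an iptables rule admits the NVMe/TCP port, and nvmf_tgt is launched inside the namespace. A minimal sketch of that fixture, reusing only the names, addresses, and arguments shown in the trace (the nvmf_tgt path is shortened here):

# Sketch of the netns fixture established by the trace above.
NETNS=cvl_0_0_ns_spdk
ip netns add "$NETNS"
ip link set cvl_0_0 netns "$NETNS"                           # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address, default namespace
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NETNS" ip link set cvl_0_0 up
ip netns exec "$NETNS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                           # initiator -> target reachability check
ip netns exec "$NETNS" ping -c 1 10.0.0.1                    # target -> initiator reachability check
ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &   # as traced by nvmfappstart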
00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=63f9bb5a903630bfda4958775e2148ef 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.00Z 00:25:08.345 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 63f9bb5a903630bfda4958775e2148ef 0 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 63f9bb5a903630bfda4958775e2148ef 0 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=63f9bb5a903630bfda4958775e2148ef 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.00Z 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.00Z 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.00Z 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.346 15:18:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=663b518981060c114e03db79d54585c8b7935ebe0d9f504b657434ac64522030 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.weS 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 663b518981060c114e03db79d54585c8b7935ebe0d9f504b657434ac64522030 3 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 663b518981060c114e03db79d54585c8b7935ebe0d9f504b657434ac64522030 3 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=663b518981060c114e03db79d54585c8b7935ebe0d9f504b657434ac64522030 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.weS 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.weS 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.weS 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=68f276b5363f0ce1915413a1350319e0b93677d2cab37e9c 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0Ph 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 68f276b5363f0ce1915413a1350319e0b93677d2cab37e9c 0 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 68f276b5363f0ce1915413a1350319e0b93677d2cab37e9c 0 
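gen_dhchap_key, traced above for keys[0]/ckeys[0] and continuing below for the remaining keys, pulls the raw secret from /dev/urandom with xxd and wraps it in a DHHC-1 secret via an inline `python -` helper whose body xtrace does not echo. A sketch of the equivalent steps, assuming the helper emits the usual NVMe DH-HMAC-CHAP secret layout of base64(key || 4-byte little-endian CRC-32) with the hash id in the second field; that encoding detail is an assumption, not something shown in the trace:

# Sketch of `gen_dhchap_key null 32` as traced above (digest 0, 32 hex characters).
digest=0                                       # 0=null, 1=sha256, 2=sha384, 3=sha512, per the traced digests map
len=32                                         # requested key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom) # raw secret, e.g. 63f9bb5a... in the trace
file=$(mktemp -t spdk.key-null.XXX)

python3 - "$key" "$digest" <<'PY' > "$file"
# Assumed DHHC-1 encoding: base64 of key bytes plus little-endian CRC-32, hash id as two hex digits.
import base64, binascii, sys
raw = binascii.unhexlify(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
PY

chmod 0600 "$file"                             # traced: chmod 0600 /tmp/spdk.key-null.00Z
echo "$file"                                   # this path is what lands in keys[] / ckeys[]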
00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=68f276b5363f0ce1915413a1350319e0b93677d2cab37e9c 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0Ph 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0Ph 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.0Ph 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1dd0d14f89b9eece490e867638c3f673489d06760c6cc9ef 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.19a 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1dd0d14f89b9eece490e867638c3f673489d06760c6cc9ef 2 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1dd0d14f89b9eece490e867638c3f673489d06760c6cc9ef 2 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1dd0d14f89b9eece490e867638c3f673489d06760c6cc9ef 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.19a 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.19a 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.19a 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:08.346 15:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.346 15:18:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d2cc60d324abce924d7d6a0f098fe47e 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.O0M 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d2cc60d324abce924d7d6a0f098fe47e 1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d2cc60d324abce924d7d6a0f098fe47e 1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d2cc60d324abce924d7d6a0f098fe47e 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.O0M 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.O0M 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.O0M 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c13a82705ae87b7a909b52f3e80f3a45 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.H3f 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c13a82705ae87b7a909b52f3e80f3a45 1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c13a82705ae87b7a909b52f3e80f3a45 1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=c13a82705ae87b7a909b52f3e80f3a45 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.H3f 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.H3f 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.H3f 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:08.346 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:08.347 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43243a9aebd7c3b2f43bfc2b946c10b351a4af7c20728551 00:25:08.347 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rjN 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43243a9aebd7c3b2f43bfc2b946c10b351a4af7c20728551 2 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43243a9aebd7c3b2f43bfc2b946c10b351a4af7c20728551 2 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43243a9aebd7c3b2f43bfc2b946c10b351a4af7c20728551 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rjN 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rjN 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rjN 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:08.605 15:18:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c3567d77e49244093d03f65783f74616 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kW9 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c3567d77e49244093d03f65783f74616 0 00:25:08.605 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c3567d77e49244093d03f65783f74616 0 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c3567d77e49244093d03f65783f74616 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kW9 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kW9 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kW9 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c147617acfb7c05393eeeedcbc1268385b9d9d5689e74bac8af1dc188396aff9 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.c5e 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c147617acfb7c05393eeeedcbc1268385b9d9d5689e74bac8af1dc188396aff9 3 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c147617acfb7c05393eeeedcbc1268385b9d9d5689e74bac8af1dc188396aff9 3 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c147617acfb7c05393eeeedcbc1268385b9d9d5689e74bac8af1dc188396aff9 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.c5e 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.c5e 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.c5e 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1559042 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1559042 ']' 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.606 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.00Z 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.weS ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.weS 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.0Ph 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.19a ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.19a 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.O0M 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.H3f ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.H3f 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rjN 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kW9 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kW9 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.c5e 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.865 15:18:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:08.865 15:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:11.395 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:11.653 Waiting for block devices as requested 00:25:11.911 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:11.911 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:11.911 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:12.169 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:12.169 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:12.169 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:12.169 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:12.426 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:12.426 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:12.426 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:12.684 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:12.684 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:12.684 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:12.684 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:12.945 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:12.945 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:12.945 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:13.631 No valid GPT data, bailing 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:25:13.631 No valid GPT data, bailing 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:13.631 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 
-a 10.0.0.1 -t tcp -s 4420 00:25:13.890 00:25:13.890 Discovery Log Number of Records 2, Generation counter 2 00:25:13.890 =====Discovery Log Entry 0====== 00:25:13.890 trtype: tcp 00:25:13.890 adrfam: ipv4 00:25:13.890 subtype: current discovery subsystem 00:25:13.890 treq: not specified, sq flow control disable supported 00:25:13.890 portid: 1 00:25:13.890 trsvcid: 4420 00:25:13.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:13.890 traddr: 10.0.0.1 00:25:13.890 eflags: none 00:25:13.890 sectype: none 00:25:13.890 =====Discovery Log Entry 1====== 00:25:13.890 trtype: tcp 00:25:13.890 adrfam: ipv4 00:25:13.890 subtype: nvme subsystem 00:25:13.890 treq: not specified, sq flow control disable supported 00:25:13.890 portid: 1 00:25:13.890 trsvcid: 4420 00:25:13.890 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:13.890 traddr: 10.0.0.1 00:25:13.890 eflags: none 00:25:13.890 sectype: none 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:13.890 15:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.890 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.149 nvme0n1 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.149 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.150 nvme0n1 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.150 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.409 15:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
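The attach traced just above is the host half of one authentication round: the generated DHHC-1 secrets are loaded into SPDK's keyring by name, the negotiable digest/DH-group set is narrowed to the combination under test, and the controller is attached with DH-HMAC-CHAP enabled. A condensed sketch of that RPC sequence follows; it assumes, as in SPDK's test harness, that rpc_cmd forwards to scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten polled earlier.

  # Secrets were registered once up front (host/auth.sh@80-82 in the trace):
  rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.0Ph       # host secret for keyid 1
  rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.19a    # controller (bidirectional) secret
  # Each connect_authenticate round then narrows the options and attaches:
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # verify, then tear down
  rpc_cmd bdev_nvme_detach_controller nvme0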
00:25:14.409 nvme0n1 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.409 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.668 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.668 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.668 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:14.668 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.668 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.669 nvme0n1 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.669 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.928 nvme0n1 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- 
# ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.928 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.929 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.188 nvme0n1 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
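From here the trace repeats the same pattern with the next DH group (ffdhe3072) and so on; the host/auth.sh@100-104 markers scattered through it correspond to a triple sweep over digests, DH groups and key indices. A rough reconstruction of that loop, using the array names visible in the trace, is:

  # Sweep driven by host/auth.sh@100-104; the digest and dhgroup lists are the
  # ones printed at host/auth.sh@94 earlier in the trace.
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do                        # keys[0]..keys[4] generated above
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # kernel target side (see sketch below)
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # SPDK host side attach/verify/detach
          done
      done
  done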
00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
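Each nvmet_auth_set_key call (host/auth.sh@42-51, as in the ffdhe3072 round above) pushes the per-round secret into the kernel target's host entry created earlier under /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0. Because set -x does not print redirections, only the echo arguments appear in the trace; the attribute names below are therefore an assumption based on the usual Linux nvmet configfs layout, not something shown in this log.

  # Plausible expansion of nvmet_auth_set_key <digest> <dhgroup> <keyid>; the
  # dhchap_* attribute names are assumed, the echoed values come from the trace.
  hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac(${digest})" > "$hostdir/dhchap_hash"      # e.g. 'hmac(sha256)'
  echo "$dhgroup"        > "$hostdir/dhchap_dhgroup"   # e.g. ffdhe3072
  echo "$key"            > "$hostdir/dhchap_key"       # DHHC-1:..: host secret (keys[keyid])
  [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"   # bidirectional secret, when present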
00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.188 15:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.447 nvme0n1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.447 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.706 nvme0n1 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:15.706 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
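The @101/@102 markers above come from two nested loops in host/auth.sh: every DH group is exercised against every configured key index. In outline, assuming nothing beyond what the markers themselves show (only sha256 and the ffdhe groups appearing in this log are confirmed here):

    for dhgroup in "${dhgroups[@]}"; do                          # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                           # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done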
00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.707 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.966 nvme0n1 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe3072 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.966 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.225 nvme0n1 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.225 15:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 nvme0n1 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:16.484 15:18:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.484 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 
15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.743 nvme0n1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.743 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.002 nvme0n1 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 2 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.002 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.003 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.262 
15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.262 15:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.262 nvme0n1 00:25:17.262 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.262 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.262 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.262 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.262 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.521 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.780 nvme0n1 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.780 
15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.780 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.039 nvme0n1 00:25:18.039 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.039 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.039 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.039 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.039 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 
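Every round above finishes with the same connect/verify/detach cycle: attach with the per-keyid DH-HMAC-CHAP secrets, confirm the controller actually materialised, then tear it down. A condensed sketch of that cycle as it appears in the trace at host/auth.sh@61-65 (any waits and extra cleanup inside connect_authenticate are not visible in this excerpt and are omitted):

    # the controller-key argument is only passed when a ckey exists for this keyid,
    # via the ckey=() array seen in the trace
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # authentication only counts as passed if the controller shows up by name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0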
00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.040 15:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.608 nvme0n1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.608 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 nvme0n1 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.867 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.124 15:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 nvme0n1 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:19.383 15:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.383 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 nvme0n1 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
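The entries above trace one full pass of the connect_authenticate flow for sha256/ffdhe6144: the target key is installed via nvmet_auth_set_key, the host side is restricted to the digest/dhgroup under test with bdev_nvme_set_options, the controller is attached with --dhchap-key/--dhchap-ctrlr-key, checked through bdev_nvme_get_controllers, and detached again. As a reading aid, a minimal standalone sketch of that host-side RPC sequence follows; it assumes SPDK's scripts/rpc.py wrapper (what rpc_cmd resolves to in this trace) and that the key names key0/ckey0 were registered with the keyring earlier in the test setup, which is outside this excerpt.

# Hypothetical replay of one host-side iteration (sha256 + ffdhe6144, keyid 0).
# Assumes a reachable NVMe/TCP target at 10.0.0.1:4420 and pre-registered keyring entries key0/ckey0.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# The test treats authentication as successful when the controller shows up by name.
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0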
00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.950 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:19.951 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.209 nvme0n1 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.209 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.210 15:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.210 15:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.210 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.210 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.210 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.210 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.778 nvme0n1 00:25:20.778 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.778 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.778 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.778 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.778 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.037 15:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.037 15:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.037 15:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.604 nvme0n1 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:21.604 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.605 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.173 nvme0n1 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.173 15:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.740 nvme0n1 00:25:22.740 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.740 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.740 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.740 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.740 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.740 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.999 
15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.999 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.000 15:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.568 nvme0n1 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.568 
15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.568 nvme0n1 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.568 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 nvme0n1 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.087 15:18:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.087 nvme0n1 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.087 15:18:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.087 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 
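The trace here and below repeats one pattern per DH-CHAP key index: host/auth.sh installs the key on the kernel nvmet target (nvmet_auth_set_key), restricts the SPDK host to the digest/dhgroup under test (bdev_nvme_set_options), attaches the controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it. A minimal bash sketch of that per-key sequence, assuming the rpc_cmd and nvmet_auth_set_key helpers and the keys[]/ckeys[] arrays that host/auth.sh sets up earlier in this run:

# Sketch only -- assumes an SPDK host app with rpc_cmd wired to scripts/rpc.py and
# a kernel nvmet target already configured by host/auth.sh, with keys[]/ckeys[] populated.
digest=sha384
dhgroup=ffdhe2048
for keyid in "${!keys[@]}"; do
  # Target side: install the DH-CHAP secret (and the bidirectional secret when one exists).
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  # Host side: only negotiate the digest/dhgroup under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Connect with the matching key; pass the controller key only when a ckey is defined.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
  # Authentication succeeded if the controller appears; then tear it down for the next key.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
done

The same loop then repeats with dhgroup set to ffdhe3072 and ffdhe4096, which is what the remainder of this trace shows.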
00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.347 15:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.347 nvme0n1 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha384 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.347 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.606 nvme0n1 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.606 15:18:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.606 15:18:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.606 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.865 nvme0n1 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:24.865 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.866 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.125 nvme0n1 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.125 15:18:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.125 15:18:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.125 15:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.384 nvme0n1 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.384 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.384 
15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.643 nvme0n1 00:25:25.643 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.643 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.643 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.643 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.643 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.644 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.903 nvme0n1 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.903 15:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.903 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.162 nvme0n1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.162 15:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.420 nvme0n1 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:26.420 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.677 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.678 
15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.678 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.936 nvme0n1 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.936 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.195 nvme0n1 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.195 15:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.195 15:18:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.195 15:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.453 nvme0n1 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.453 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 
00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.454 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 nvme0n1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.020 15:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.279 nvme0n1 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.279 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.537 15:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.537 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.538 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 nvme0n1 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:28.796 15:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.796 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.363 nvme0n1 00:25:29.363 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.363 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.363 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.363 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.363 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.364 15:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.364 15:18:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.364 15:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.622 nvme0n1 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:29.622 15:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:29.622 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.623 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.881 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.447 nvme0n1 00:25:30.447 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.447 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.447 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.447 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.447 15:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.447 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.448 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.014 nvme0n1 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 
00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.014 15:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.581 nvme0n1 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.581 15:18:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.581 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.582 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.582 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.582 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.148 nvme0n1 00:25:32.148 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.148 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.148 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.148 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.148 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.407 15:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.974 nvme0n1 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.974 15:18:34 
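The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment that recurs at host/auth.sh@58 in this trace builds the controller-key arguments only when a bidirectional key exists for the current key ID (key ID 4 above has none, so the attach runs with unidirectional authentication). A minimal, self-contained illustration of that bash idiom; the array contents below are placeholders, not the values from this run:

# Illustration of the ${var:+...} idiom from host/auth.sh@58: expand to the extra
# attach arguments only when a controller (bidirectional) key is defined.
ckeys=([1]="DHHC-1:02:placeholderctrlrkey==" [4]="")   # placeholder values
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra attach args: ${ckey[*]:-<none>}"
done
# keyid=1 yields "--dhchap-ctrlr-key ckey1"; keyid=4 yields nothing.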
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.974 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.975 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.234 nvme0n1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.234 15:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.234 nvme0n1 00:25:33.234 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.234 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.234 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.234 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.234 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 
2 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.495 nvme0n1 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.495 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.758 15:18:35 
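After the detach above, the trace re-enters the loops at host/auth.sh@100-102: each digest is paired with each DH group and each configured key ID, and every combination runs a target-side key setup followed by a host-side authenticated connect. A condensed sketch of that structure; the array contents are inferred from the values visible in this part of the trace, and keys/ckeys plus the two helpers are defined elsewhere in host/auth.sh:

# Condensed sketch of the iteration pattern traced here (not the verbatim script).
digests=(sha384 sha512)                          # digests seen in this part of the log
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
for digest in "${digests[@]}"; do                # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
        for keyid in "${!keys[@]}"; do           # host/auth.sh@102 -- key IDs 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side, @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side, @104
        done
    done
done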
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.758 
15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.758 nvme0n1 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.758 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.759 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 nvme0n1 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.017 15:18:35 
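The nvmf/common.sh lines repeated around each attach (ip_candidates, echo 10.0.0.1) come from get_main_ns_ip, which maps the active transport to the name of the variable holding the address to dial and then dereferences it. A sketch of that selection reconstructed from the trace; the variable names TEST_TRANSPORT, NVMF_INITIATOR_IP, and NVMF_FIRST_TARGET_IP are assumptions, since xtrace only shows the expanded values (tcp, 10.0.0.1):

TEST_TRANSPORT=tcp              # assumed name; the trace only shows the value "tcp"
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2   # example value; unused on the tcp path

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # stores the variable *name*, not its value
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
        return 1
    fi
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion: the value of the named variable
    echo "${!ip}"                 # 10.0.0.1 in this run, fed to -a of the attach RPC
}

get_main_ns_ip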
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.017 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.018 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.018 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.018 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.276 nvme0n1 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.276 15:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.276 15:18:36 
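The attach that just completed is the core of each connect_authenticate pass: restrict the host's allowed digests and DH groups, attach with the key pair, confirm the controller came up, then detach for the next iteration. A minimal sketch of one pass using only RPCs and arguments that appear in the trace; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py (a stand-in definition is shown, with the checkout path as an assumption), and key1/ckey1 are the key names registered earlier in the run, not raw secrets:

# One host-side DH-HMAC-CHAP pass as traced above (key ID 1, sha512 / ffdhe3072).
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}              # assumption: location of the SPDK checkout
rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # stand-in for the autotest rpc_cmd helper

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # host/auth.sh@64
rpc_cmd bdev_nvme_detach_controller nvme0                                  # host/auth.sh@65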
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.276 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.557 nvme0n1 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.557 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.816 nvme0n1 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe3072 3 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.816 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.074 nvme0n1 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.074 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.075 15:18:36 
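The echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... entries in this trace come from nvmet_auth_set_key, which programs the kernel nvmet target's expectations for the host before the attach. set -x does not print redirection targets, so the configfs paths below are an assumption based on the standard Linux nvmet DH-HMAC-CHAP host attributes, not something visible in this log:

# Assumed shape of nvmet_auth_set_key -- a sketch only; the configfs paths are not
# shown in the trace and are an assumption.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}       # arrays populated earlier in host/auth.sh
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac($digest)" > "$host/dhchap_hash"          # e.g. hmac(sha512), as in the trace (@48)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"       # e.g. ffdhe3072 (@49)
    echo "$key"          > "$host/dhchap_key"           # (@50)
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only when a ctrlr key exists (@51)
}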
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.075 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.333 nvme0n1 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.333 15:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.333 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.592 nvme0n1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.592 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.850 nvme0n1 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.850 15:18:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.850 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.108 15:18:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.108 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.366 nvme0n1 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:36.367 15:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.367 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.367 
15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.625 nvme0n1 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.626 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.884 nvme0n1 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.884 15:18:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.884 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.885 15:18:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.451 nvme0n1 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.451 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.452 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.709 nvme0n1 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:37.709 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.967 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.968 
15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.968 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.226 nvme0n1 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.226 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.227 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.227 15:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.793 nvme0n1 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.793 15:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.793 15:18:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.793 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.052 nvme0n1 00:25:39.052 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.052 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.052 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.052 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.052 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.052 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
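Each pass through the trace above repeats the same host-side sequence from host/auth.sh: restrict the initiator to one digest/dhgroup pair with bdev_nvme_set_options, attach to the target at 10.0.0.1:4420 with --dhchap-key (and --dhchap-ctrlr-key when a controller key is defined), check via bdev_nvme_get_controllers that the controller registered as nvme0, then detach it before moving to the next key id. The sketch below is a minimal reconstruction of that sequence built only from the rpc_cmd invocations visible in this log; it assumes an SPDK checkout (so scripts/rpc.py is available on the default RPC socket), a target already listening on 10.0.0.1:4420 with subsystem nqn.2024-02.io.spdk:cnode0 set up for DH-CHAP, and key names key0/ckey0 registered with the keyring earlier in the test run, a setup step that is outside this excerpt.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration, reconstructed from the trace.
# rpc_cmd stands in for the test suite's helper and simply calls scripts/rpc.py.
set -e

rpc_cmd() { ./scripts/rpc.py "$@"; }

digest=sha512
dhgroup=ffdhe8192
key_name=key0     # keyring name registered earlier in the test (not shown in this excerpt)
ckey_name=ckey0   # controller key name; left empty for key id 4 in this log

# Allow only the digest/dhgroup pair under test on the initiator side.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-CHAP; pass the controller key argument only when a ckey is defined.
ctrlr_key_arg=()
[[ -n $ckey_name ]] && ctrlr_key_arg=(--dhchap-ctrlr-key "$ckey_name")
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "$key_name" "${ctrlr_key_arg[@]}"

# The controller should come up as nvme0; verify, then tear it down for the next round.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

In the log this sequence sits inside two loops from host/auth.sh (for dhgroup in "${dhgroups[@]}" and for keyid in "${!keys[@]}"), and the matching target-side key is written by nvmet_auth_set_key just before each attach, which is what the echo 'hmac(sha512)', echo ffdhe8192 and echo DHHC-1:... lines around this point correspond to.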
00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjNmOWJiNWE5MDM2MzBiZmRhNDk1ODc3NWUyMTQ4ZWap037a: 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYzYjUxODk4MTA2MGMxMTRlMDNkYjc5ZDU0NTg1YzhiNzkzNWViZTBkOWY1MDRiNjU3NDM0YWM2NDUyMjAzMA/Wi/A=: 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.310 15:18:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.877 nvme0n1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.877 15:18:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.444 nvme0n1 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.444 15:18:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.444 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.445 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.010 nvme0n1 00:25:41.010 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.010 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.010 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.011 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.011 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.011 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDMyNDNhOWFlYmQ3YzNiMmY0M2JmYzJiOTQ2YzEwYjM1MWE0YWY3YzIwNzI4NTUxnrXdXA==: 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: ]] 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM1NjdkNzdlNDkyNDQwOTNkMDNmNjU3ODNmNzQ2MTaMqxIk: 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.268 15:18:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.268 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.269 15:18:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.835 nvme0n1 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.835 15:18:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzE0NzYxN2FjZmI3YzA1MzkzZWVlZWRjYmMxMjY4Mzg1YjlkOWQ1Njg5ZTc0YmFjOGFmMWRjMTg4Mzk2YWZmOejRyzg=: 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.835 15:18:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.835 15:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.403 nvme0n1 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
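Condensed, each connect_authenticate iteration traced above reduces to a short host-side RPC sequence. The sketch below is a minimal reconstruction using SPDK's scripts/rpc.py client (the trace's rpc_cmd helper issues the same JSON-RPC methods), with the transport, address, NQNs and key names taken from the trace; it assumes the DH-HMAC-CHAP secrets referenced as key1/ckey1 were registered with the kernel target and the host earlier in auth.sh.

  # Host side: restrict DH-HMAC-CHAP to one digest/DH-group pair,
  # then attach with the per-subsystem key (key1) and the bidirectional
  # controller key (ckey1).
  ./scripts/rpc.py bdev_nvme_set_options \
          --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Confirm the controller appeared, then detach before the next key is tried.
  ./scripts/rpc.py bdev_nvme_get_controllers
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The negative tests that follow run the same attach with a missing or mismatched key, which is why they expect a JSON-RPC error (code -5, "Input/output error") instead of a new nvme0 controller.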
00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:42.403 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.404 request: 00:25:42.404 { 00:25:42.404 "name": "nvme0", 00:25:42.404 "trtype": "tcp", 00:25:42.404 "traddr": "10.0.0.1", 00:25:42.404 "adrfam": "ipv4", 00:25:42.404 "trsvcid": "4420", 00:25:42.404 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.404 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.404 "prchk_reftag": false, 00:25:42.404 "prchk_guard": false, 00:25:42.404 "hdgst": false, 00:25:42.404 "ddgst": false, 00:25:42.404 "allow_unrecognized_csi": false, 00:25:42.404 "method": "bdev_nvme_attach_controller", 00:25:42.404 "req_id": 1 00:25:42.404 } 00:25:42.404 Got JSON-RPC error response 00:25:42.404 response: 00:25:42.404 { 00:25:42.404 "code": -5, 00:25:42.404 "message": "Input/output 
error" 00:25:42.404 } 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.404 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.662 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.662 request: 00:25:42.662 { 00:25:42.662 "name": "nvme0", 00:25:42.662 "trtype": "tcp", 00:25:42.662 "traddr": "10.0.0.1", 00:25:42.662 "adrfam": "ipv4", 00:25:42.662 "trsvcid": "4420", 00:25:42.662 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.662 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.662 "prchk_reftag": false, 00:25:42.662 "prchk_guard": false, 00:25:42.662 "hdgst": false, 00:25:42.662 "ddgst": false, 00:25:42.662 "dhchap_key": "key2", 00:25:42.662 "allow_unrecognized_csi": false, 00:25:42.662 "method": "bdev_nvme_attach_controller", 00:25:42.662 "req_id": 1 00:25:42.662 } 00:25:42.662 Got JSON-RPC error response 00:25:42.662 response: 00:25:42.662 { 00:25:42.662 "code": -5, 00:25:42.662 "message": "Input/output error" 00:25:42.662 } 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.663 request: 00:25:42.663 { 00:25:42.663 "name": "nvme0", 00:25:42.663 "trtype": "tcp", 00:25:42.663 "traddr": "10.0.0.1", 00:25:42.663 "adrfam": "ipv4", 00:25:42.663 "trsvcid": "4420", 00:25:42.663 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.663 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.663 "prchk_reftag": false, 00:25:42.663 "prchk_guard": false, 00:25:42.663 "hdgst": false, 00:25:42.663 "ddgst": false, 00:25:42.663 "dhchap_key": "key1", 00:25:42.663 "dhchap_ctrlr_key": "ckey2", 00:25:42.663 "allow_unrecognized_csi": false, 00:25:42.663 "method": "bdev_nvme_attach_controller", 00:25:42.663 "req_id": 1 00:25:42.663 } 00:25:42.663 Got JSON-RPC error response 00:25:42.663 response: 00:25:42.663 { 00:25:42.663 "code": -5, 00:25:42.663 "message": "Input/output error" 00:25:42.663 } 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.663 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.921 15:18:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.921 nvme0n1 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.921 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.922 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:43.179 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.180 request: 00:25:43.180 { 00:25:43.180 "name": "nvme0", 00:25:43.180 "dhchap_key": "key1", 00:25:43.180 "dhchap_ctrlr_key": "ckey2", 00:25:43.180 "method": "bdev_nvme_set_keys", 00:25:43.180 "req_id": 1 00:25:43.180 } 00:25:43.180 Got JSON-RPC error response 00:25:43.180 response: 00:25:43.180 { 00:25:43.180 "code": -13, 00:25:43.180 "message": "Permission denied" 00:25:43.180 } 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:43.180 15:18:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # (( 1 != 0 )) 00:25:44.119 15:18:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:45.493 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmMjc2YjUzNjNmMGNlMTkxNTQxM2ExMzUwMzE5ZTBiOTM2NzdkMmNhYjM3ZTljsW9Oag==: 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: ]] 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRkMGQxNGY4OWI5ZWVjZTQ5MGU4Njc2MzhjM2Y2NzM0ODlkMDY3NjBjNmNjOWVmXv+0Mw==: 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.494 15:18:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.494 nvme0n1 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJjYzYwZDMyNGFiY2U5MjRkN2Q2YTBmMDk4ZmU0N2W9Pwkj: 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: ]] 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzYTgyNzA1YWU4N2I3YTkwOWI1MmYzZTgwZjNhNDWsOpbo: 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.494 request: 00:25:45.494 { 00:25:45.494 "name": "nvme0", 00:25:45.494 "dhchap_key": "key2", 00:25:45.494 "dhchap_ctrlr_key": "ckey1", 00:25:45.494 "method": "bdev_nvme_set_keys", 00:25:45.494 "req_id": 1 00:25:45.494 } 00:25:45.494 Got JSON-RPC 
error response 00:25:45.494 response: 00:25:45.494 { 00:25:45.494 "code": -13, 00:25:45.494 "message": "Permission denied" 00:25:45.494 } 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:45.494 15:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:46.430 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.430 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:46.430 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.430 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.430 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.688 rmmod nvme_tcp 00:25:46.688 rmmod nvme_fabrics 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1559042 ']' 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1559042 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1559042 ']' 00:25:46.688 15:18:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1559042 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1559042 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1559042' 00:25:46.688 killing process with pid 1559042 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1559042 00:25:46.688 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1559042 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.947 15:18:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:48.850 15:18:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:51.402 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:52.084 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:52.084 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:53.020 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:53.020 15:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.00Z /tmp/spdk.key-null.0Ph /tmp/spdk.key-sha256.O0M /tmp/spdk.key-sha384.rjN /tmp/spdk.key-sha512.c5e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:53.020 15:18:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:55.552 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:55.811 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:55.811 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:55.811 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:56.069 00:25:56.069 real 0m54.382s 00:25:56.069 user 0m49.150s 00:25:56.069 sys 0m12.901s 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.069 ************************************ 00:25:56.069 END TEST nvmf_auth_host 00:25:56.069 ************************************ 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.069 ************************************ 00:25:56.069 START TEST nvmf_digest 00:25:56.069 ************************************ 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:56.069 * Looking for test storage... 00:25:56.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:56.069 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:56.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.329 --rc genhtml_branch_coverage=1 00:25:56.329 --rc genhtml_function_coverage=1 00:25:56.329 --rc genhtml_legend=1 00:25:56.329 --rc geninfo_all_blocks=1 00:25:56.329 --rc geninfo_unexecuted_blocks=1 00:25:56.329 00:25:56.329 ' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:56.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.329 --rc genhtml_branch_coverage=1 00:25:56.329 --rc genhtml_function_coverage=1 00:25:56.329 --rc genhtml_legend=1 00:25:56.329 --rc geninfo_all_blocks=1 00:25:56.329 --rc geninfo_unexecuted_blocks=1 00:25:56.329 00:25:56.329 ' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:56.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.329 --rc genhtml_branch_coverage=1 00:25:56.329 --rc genhtml_function_coverage=1 00:25:56.329 --rc genhtml_legend=1 00:25:56.329 --rc geninfo_all_blocks=1 00:25:56.329 --rc geninfo_unexecuted_blocks=1 00:25:56.329 00:25:56.329 ' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:56.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.329 --rc genhtml_branch_coverage=1 00:25:56.329 --rc genhtml_function_coverage=1 00:25:56.329 --rc genhtml_legend=1 00:25:56.329 --rc geninfo_all_blocks=1 00:25:56.329 --rc geninfo_unexecuted_blocks=1 00:25:56.329 00:25:56.329 ' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.329 
15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:56.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:56.329 15:18:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.329 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.330 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:56.330 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:56.330 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:56.330 15:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.896 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.897 
15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:02.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:02.897 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:02.897 Found net devices under 0000:af:00.0: cvl_0_0 
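The "Found net devices under 0000:af:00.0: cvl_0_0" line above comes from globbing the net/ directory that sysfs exposes for each detected PCI function; the same glob runs again for 0000:af:00.1 and yields cvl_0_1. A minimal sketch of that lookup, using the PCI address from this log (the real helper additionally skips functions whose glob comes back empty):

    # map a PCI network function to its kernel net device(s) through sysfs
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"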
00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:02.897 Found net devices under 0000:af:00.1: cvl_0_1 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:26:02.897 00:26:02.897 --- 10.0.0.2 ping statistics --- 00:26:02.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.897 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:26:02.897 00:26:02.897 --- 10.0.0.1 ping statistics --- 00:26:02.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.897 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.897 ************************************ 00:26:02.897 START TEST nvmf_digest_clean 00:26:02.897 ************************************ 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1572840 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1572840 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1572840 ']' 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.897 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.898 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.898 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.898 15:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.898 [2024-12-09 15:19:04.002644] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:02.898 [2024-12-09 15:19:04.002684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.898 [2024-12-09 15:19:04.081421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.898 [2024-12-09 15:19:04.120648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.898 [2024-12-09 15:19:04.120682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.898 [2024-12-09 15:19:04.120690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.898 [2024-12-09 15:19:04.120696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.898 [2024-12-09 15:19:04.120702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
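The nvmf_tgt that is starting here runs inside the cvl_0_0_ns_spdk network namespace prepared by nvmf_tcp_init a few lines earlier: the first e810 port (cvl_0_0) is moved into that namespace and addressed as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the trace above, the split is roughly:

    # target interface in its own namespace, initiator in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns reaches the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace reaches the initiator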
00:26:02.898 [2024-12-09 15:19:04.121237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.898 null0 00:26:02.898 [2024-12-09 15:19:04.277170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.898 [2024-12-09 15:19:04.301357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1572862 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1572862 /var/tmp/bperf.sock 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1572862 ']' 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.898 [2024-12-09 15:19:04.351686] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:02.898 [2024-12-09 15:19:04.351724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572862 ] 00:26:02.898 [2024-12-09 15:19:04.425499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.898 [2024-12-09 15:19:04.466123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:02.898 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:03.156 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.156 15:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.414 nvme0n1 00:26:03.414 15:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:03.414 15:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.414 Running I/O for 2 seconds... 
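Every run_bperf pass follows the shape just traced for the 4 KiB randread case: bdevperf starts suspended on a private RPC socket, framework init completes over that socket, a controller is attached over NVMe/TCP with data digest (--ddgst) enabled so each payload carries a CRC32C digest, and bdevperf.py then drives the timed workload. Stripped of the xtrace noise (with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout), the sequence is:

    # start bdevperf paused, with its own RPC socket
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish init, then attach the target with TCP data digest enabled
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload against the resulting nvme0n1 bdev
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The 128 KiB and randwrite passes later in this log differ only in the -w, -o and -q values handed to bdevperf.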
00:26:05.720 25592.00 IOPS, 99.97 MiB/s [2024-12-09T14:19:07.515Z] 25222.50 IOPS, 98.53 MiB/s 00:26:05.720 Latency(us) 00:26:05.720 [2024-12-09T14:19:07.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.720 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:05.720 nvme0n1 : 2.00 25224.01 98.53 0.00 0.00 5068.30 2715.06 11983.73 00:26:05.720 [2024-12-09T14:19:07.515Z] =================================================================================================================== 00:26:05.720 [2024-12-09T14:19:07.515Z] Total : 25224.01 98.53 0.00 0.00 5068.30 2715.06 11983.73 00:26:05.720 { 00:26:05.720 "results": [ 00:26:05.720 { 00:26:05.720 "job": "nvme0n1", 00:26:05.720 "core_mask": "0x2", 00:26:05.720 "workload": "randread", 00:26:05.720 "status": "finished", 00:26:05.720 "queue_depth": 128, 00:26:05.720 "io_size": 4096, 00:26:05.720 "runtime": 2.004955, 00:26:05.720 "iops": 25224.007521365817, 00:26:05.720 "mibps": 98.53127938033522, 00:26:05.720 "io_failed": 0, 00:26:05.720 "io_timeout": 0, 00:26:05.720 "avg_latency_us": 5068.298631624441, 00:26:05.720 "min_latency_us": 2715.062857142857, 00:26:05.720 "max_latency_us": 11983.725714285714 00:26:05.720 } 00:26:05.720 ], 00:26:05.720 "core_count": 1 00:26:05.720 } 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:05.720 | select(.opcode=="crc32c") 00:26:05.720 | "\(.module_name) \(.executed)"' 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1572862 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1572862 ']' 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1572862 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572862 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572862' 00:26:05.720 killing process with pid 1572862 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1572862 00:26:05.720 Received shutdown signal, test time was about 2.000000 seconds 00:26:05.720 00:26:05.720 Latency(us) 00:26:05.720 [2024-12-09T14:19:07.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.720 [2024-12-09T14:19:07.515Z] =================================================================================================================== 00:26:05.720 [2024-12-09T14:19:07.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.720 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1572862 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1573343 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1573343 /var/tmp/bperf.sock 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1573343 ']' 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.980 [2024-12-09 15:19:07.612534] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
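The JSON block printed after the first run above is what host/digest.sh actually inspects: alongside the IOPS and latency summary, it fetches the accel crc32c counters over the same bperf socket and requires that the software module executed a non-zero number of operations (these passes run with scan_dsa=false, so no DSA offload is expected). A short sketch of both checks, assuming the per-run JSON has been saved to result.json (that filename is purely illustrative):

    # summarize the saved bdevperf result (field names as printed above)
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json
    # confirm crc32c digests were actually computed, as digest.sh does
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'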
00:26:05.980 [2024-12-09 15:19:07.612584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573343 ] 00:26:05.980 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:05.980 Zero copy mechanism will not be used. 00:26:05.980 [2024-12-09 15:19:07.688640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.980 [2024-12-09 15:19:07.729253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:05.980 15:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.545 15:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.545 15:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.803 nvme0n1 00:26:06.803 15:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:06.803 15:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:06.803 Zero copy mechanism will not be used. 00:26:06.803 Running I/O for 2 seconds... 
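In these result tables the MiB/s column is just the IOPS multiplied by the I/O size: MiB/s = IOPS * io_size / 2^20. For the 128 KiB interval reported next (5686.00 IOPS) and the 4 KiB total above (25224.01 IOPS) that works out to:

    # 131072-byte I/Os:  5686  * 131072 / 1048576  = 710.75 MiB/s
    # 4096-byte   I/Os:  25224 * 4096   / 1048576  =  98.53 MiB/s
    echo 'scale=2; 5686 * 131072 / 1048576' | bc
    echo 'scale=2; 25224 * 4096 / 1048576'  | bc

The 131072-byte size is also what triggers the "greater than zero copy threshold (65536)" banner just above: these large-block runs fall back to the non-zero-copy path.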
00:26:09.110 5296.00 IOPS, 662.00 MiB/s [2024-12-09T14:19:10.905Z] 5686.00 IOPS, 710.75 MiB/s 00:26:09.110 Latency(us) 00:26:09.110 [2024-12-09T14:19:10.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.110 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:09.110 nvme0n1 : 2.00 5685.59 710.70 0.00 0.00 2811.49 436.91 5024.43 00:26:09.110 [2024-12-09T14:19:10.905Z] =================================================================================================================== 00:26:09.110 [2024-12-09T14:19:10.905Z] Total : 5685.59 710.70 0.00 0.00 2811.49 436.91 5024.43 00:26:09.110 { 00:26:09.110 "results": [ 00:26:09.110 { 00:26:09.110 "job": "nvme0n1", 00:26:09.110 "core_mask": "0x2", 00:26:09.110 "workload": "randread", 00:26:09.110 "status": "finished", 00:26:09.110 "queue_depth": 16, 00:26:09.110 "io_size": 131072, 00:26:09.110 "runtime": 2.002959, 00:26:09.110 "iops": 5685.588172299083, 00:26:09.110 "mibps": 710.6985215373854, 00:26:09.110 "io_failed": 0, 00:26:09.110 "io_timeout": 0, 00:26:09.110 "avg_latency_us": 2811.4876826065865, 00:26:09.110 "min_latency_us": 436.9066666666667, 00:26:09.110 "max_latency_us": 5024.426666666666 00:26:09.110 } 00:26:09.110 ], 00:26:09.110 "core_count": 1 00:26:09.110 } 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.110 | select(.opcode=="crc32c") 00:26:09.110 | "\(.module_name) \(.executed)"' 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1573343 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1573343 ']' 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1573343 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573343 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573343' 00:26:09.110 killing process with pid 1573343 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1573343 00:26:09.110 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.110 00:26:09.110 Latency(us) 00:26:09.110 [2024-12-09T14:19:10.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.110 [2024-12-09T14:19:10.905Z] =================================================================================================================== 00:26:09.110 [2024-12-09T14:19:10.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.110 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1573343 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1574004 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1574004 /var/tmp/bperf.sock 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1574004 ']' 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.369 15:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.369 [2024-12-09 15:19:11.012354] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:26:09.369 [2024-12-09 15:19:11.012406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574004 ] 00:26:09.369 [2024-12-09 15:19:11.088675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.369 [2024-12-09 15:19:11.129267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.369 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.369 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:09.369 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.369 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.369 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:09.626 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.627 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.193 nvme0n1 00:26:10.193 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.193 15:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.193 Running I/O for 2 seconds... 
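The summary that bdevperf prints for each job reports the same throughput twice, once as IOPS and once as MiB/s, related through the configured I/O size (MiB/s = IOPS x io_size / 2^20). A minimal sanity check of that relationship against the "results" JSON of the randread run above (io_size 131072, iops 5685.59) is sketched below in Python; it is illustrative only and is not part of the test scripts.

    # Sanity check: the "mibps" field is just "iops" scaled by the I/O size.
    # Numbers copied from the randread (depth 16, 128 KiB) results above.
    iops = 5685.588172299083
    io_size = 131072                      # bytes per I/O (128 KiB)
    mibps = iops * io_size / (1024 * 1024)
    print(round(mibps, 2))                # 710.7, matching the reported 710.6985...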
00:26:12.498 28345.00 IOPS, 110.72 MiB/s [2024-12-09T14:19:14.293Z] 28480.50 IOPS, 111.25 MiB/s 00:26:12.498 Latency(us) 00:26:12.498 [2024-12-09T14:19:14.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.498 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:12.498 nvme0n1 : 2.00 28496.26 111.31 0.00 0.00 4487.55 2231.34 8613.30 00:26:12.498 [2024-12-09T14:19:14.293Z] =================================================================================================================== 00:26:12.498 [2024-12-09T14:19:14.293Z] Total : 28496.26 111.31 0.00 0.00 4487.55 2231.34 8613.30 00:26:12.498 { 00:26:12.498 "results": [ 00:26:12.498 { 00:26:12.498 "job": "nvme0n1", 00:26:12.498 "core_mask": "0x2", 00:26:12.498 "workload": "randwrite", 00:26:12.498 "status": "finished", 00:26:12.498 "queue_depth": 128, 00:26:12.498 "io_size": 4096, 00:26:12.498 "runtime": 2.003386, 00:26:12.498 "iops": 28496.255838864803, 00:26:12.498 "mibps": 111.31349937056564, 00:26:12.498 "io_failed": 0, 00:26:12.498 "io_timeout": 0, 00:26:12.498 "avg_latency_us": 4487.551373169213, 00:26:12.498 "min_latency_us": 2231.344761904762, 00:26:12.498 "max_latency_us": 8613.302857142857 00:26:12.498 } 00:26:12.498 ], 00:26:12.498 "core_count": 1 00:26:12.498 } 00:26:12.498 15:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.498 15:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.498 15:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.498 15:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.498 15:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.498 | select(.opcode=="crc32c") 00:26:12.498 | "\(.module_name) \(.executed)"' 00:26:12.498 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.498 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.498 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.498 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1574004 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1574004 ']' 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1574004 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574004 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574004' 00:26:12.499 killing process with pid 1574004 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1574004 00:26:12.499 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.499 00:26:12.499 Latency(us) 00:26:12.499 [2024-12-09T14:19:14.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.499 [2024-12-09T14:19:14.294Z] =================================================================================================================== 00:26:12.499 [2024-12-09T14:19:14.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.499 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1574004 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1574478 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1574478 /var/tmp/bperf.sock 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1574478 ']' 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:12.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:12.757 [2024-12-09 15:19:14.368984] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:26:12.757 [2024-12-09 15:19:14.369032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574478 ] 00:26:12.757 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.757 Zero copy mechanism will not be used. 00:26:12.757 [2024-12-09 15:19:14.443721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.757 [2024-12-09 15:19:14.484211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:12.757 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:13.015 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.015 15:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:13.579 nvme0n1 00:26:13.579 15:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:13.579 15:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.579 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:13.579 Zero copy mechanism will not be used. 00:26:13.579 Running I/O for 2 seconds... 
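As in the earlier runs, once the 2-second workload below finishes, digest.sh reads the accelerator statistics over the bperf RPC socket and checks that crc32c operations were executed and attributed to the expected module ("software" here, since scan_dsa is false). A rough Python equivalent of the rpc.py + jq pipeline traced in this log is sketched below; it assumes the same rpc.py path and /var/tmp/bperf.sock socket that appear above and is not part of the test itself.

    # Rough equivalent of:
    #   rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    #     | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    out = subprocess.check_output([RPC, "-s", "/var/tmp/bperf.sock", "accel_get_stats"])
    for op in json.loads(out).get("operations", []):
        if op.get("opcode") == "crc32c":
            # digest.sh asserts executed > 0 and module_name == exp_module ("software")
            print(op["module_name"], op["executed"])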
00:26:15.885 6144.00 IOPS, 768.00 MiB/s [2024-12-09T14:19:17.680Z] 6222.00 IOPS, 777.75 MiB/s 00:26:15.885 Latency(us) 00:26:15.885 [2024-12-09T14:19:17.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.885 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:15.885 nvme0n1 : 2.00 6217.13 777.14 0.00 0.00 2568.72 1958.28 6179.11 00:26:15.885 [2024-12-09T14:19:17.680Z] =================================================================================================================== 00:26:15.885 [2024-12-09T14:19:17.680Z] Total : 6217.13 777.14 0.00 0.00 2568.72 1958.28 6179.11 00:26:15.885 { 00:26:15.886 "results": [ 00:26:15.886 { 00:26:15.886 "job": "nvme0n1", 00:26:15.886 "core_mask": "0x2", 00:26:15.886 "workload": "randwrite", 00:26:15.886 "status": "finished", 00:26:15.886 "queue_depth": 16, 00:26:15.886 "io_size": 131072, 00:26:15.886 "runtime": 2.004783, 00:26:15.886 "iops": 6217.131729468975, 00:26:15.886 "mibps": 777.1414661836219, 00:26:15.886 "io_failed": 0, 00:26:15.886 "io_timeout": 0, 00:26:15.886 "avg_latency_us": 2568.720806895287, 00:26:15.886 "min_latency_us": 1958.2780952380951, 00:26:15.886 "max_latency_us": 6179.108571428572 00:26:15.886 } 00:26:15.886 ], 00:26:15.886 "core_count": 1 00:26:15.886 } 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:15.886 | select(.opcode=="crc32c") 00:26:15.886 | "\(.module_name) \(.executed)"' 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1574478 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1574478 ']' 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1574478 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574478 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574478' 00:26:15.886 killing process with pid 1574478 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1574478 00:26:15.886 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.886 00:26:15.886 Latency(us) 00:26:15.886 [2024-12-09T14:19:17.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.886 [2024-12-09T14:19:17.681Z] =================================================================================================================== 00:26:15.886 [2024-12-09T14:19:17.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.886 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1574478 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1572840 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1572840 ']' 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1572840 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572840 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572840' 00:26:16.144 killing process with pid 1572840 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1572840 00:26:16.144 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1572840 00:26:16.402 00:26:16.402 real 0m14.041s 00:26:16.402 user 0m26.969s 00:26:16.402 sys 0m4.483s 00:26:16.402 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.402 15:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.402 ************************************ 00:26:16.402 END TEST nvmf_digest_clean 00:26:16.402 ************************************ 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:16.402 ************************************ 00:26:16.402 START TEST nvmf_digest_error 00:26:16.402 ************************************ 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1575176 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1575176 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1575176 ']' 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.402 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.402 [2024-12-09 15:19:18.117531] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:16.402 [2024-12-09 15:19:18.117573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.402 [2024-12-09 15:19:18.195349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.660 [2024-12-09 15:19:18.236604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.660 [2024-12-09 15:19:18.236652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.660 [2024-12-09 15:19:18.236660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.660 [2024-12-09 15:19:18.236666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.660 [2024-12-09 15:19:18.236671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:16.660 [2024-12-09 15:19:18.237201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.660 [2024-12-09 15:19:18.309663] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.660 null0 00:26:16.660 [2024-12-09 15:19:18.405816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.660 [2024-12-09 15:19:18.430004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1575210 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1575210 /var/tmp/bperf.sock 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1575210 ']' 
00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.660 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:16.917 [2024-12-09 15:19:18.481786] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:16.917 [2024-12-09 15:19:18.481825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575210 ] 00:26:16.917 [2024-12-09 15:19:18.540645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.917 [2024-12-09 15:19:18.581130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.917 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.917 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:16.917 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:16.918 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:17.175 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:17.175 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.175 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:17.175 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.175 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.175 15:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.754 nvme0n1 00:26:17.754 15:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:17.754 15:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.754 15:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
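The nvmf_digest_error variant differs from the clean runs in that crc32c is assigned to the "error" accel module (via rpc_cmd against the freshly started nvmf_tgt) and a corrupt result is injected for 256 operations before perform_tests starts; that is why the READ completions below are logged as data digest errors with COMMAND TRANSIENT TRANSPORT ERROR status (the initiator side was configured with --bdev-retry-count -1). The three accel RPCs traced above amount to roughly the following sketch; the socket path is a placeholder assumption, since the script resolves it through its rpc_cmd helper.

    # Sketch of the error-injection RPCs seen in this log (illustrative only).
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/spdk.sock"   # assumption: default app socket used by rpc_cmd

    def rpc(*args):
        subprocess.check_call([RPC, "-s", SOCK, *args])

    rpc("accel_assign_opc", "-o", "crc32c", "-m", "error")            # route crc32c to the error module
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")  # start from a clean injection state
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")  # corrupt the next 256 ops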
00:26:17.754 15:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.754 15:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:17.754 15:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.754 Running I/O for 2 seconds... 00:26:17.755 [2024-12-09 15:19:19.380190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.380230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.380244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.390994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.391019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.391028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.401066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.401087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.401095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.410235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.410265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.419588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.419608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.419616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.429011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.429031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.429039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.440308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.440328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.440336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.452435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.452455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.452464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.460905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.460925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.460934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.470734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.470754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.470765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.481042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.481063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.481071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.493224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.493244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.493252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.506584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.506605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.506613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.514488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.514508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.514516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.526663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.526683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.526691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.537842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.537862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.537870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.755 [2024-12-09 15:19:19.547087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:17.755 [2024-12-09 15:19:19.547110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.755 [2024-12-09 15:19:19.547119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.013 [2024-12-09 15:19:19.555536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.013 [2024-12-09 15:19:19.555558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.013 [2024-12-09 15:19:19.555566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.013 [2024-12-09 15:19:19.564997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.013 [2024-12-09 15:19:19.565020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.013 [2024-12-09 15:19:19.565029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.013 [2024-12-09 15:19:19.575160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.013 [2024-12-09 15:19:19.575180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.575188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.583827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.583847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.583855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.593051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.593072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.593080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.601448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.601468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.601477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.613839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.613858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.613866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.622552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.622573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.622581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.635243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.635263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.635270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.645302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.645322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 
[2024-12-09 15:19:19.645333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.653278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.653297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.653305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.663430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.663449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.663457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.674048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.674068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.674076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.682450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.682470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.682477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.695064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.695083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.695091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.706888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.706907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.706915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.719596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.719617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1625 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.719625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.730620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.730639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.730646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.739274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.739297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.739313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.750746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.750766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.750774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.759565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.759585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.759593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.772269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.772289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.772296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.780363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.780382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.780390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.792149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.792169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:24975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.792177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.014 [2024-12-09 15:19:19.804343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.014 [2024-12-09 15:19:19.804366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.014 [2024-12-09 15:19:19.804374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.815573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.815596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.815604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.822993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.823015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.823023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.833078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.833099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.833107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.842883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.842903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.842911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.854763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.854783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.854791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.865398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.865418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.865426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.877564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.877584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.877591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.889936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.889955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.889963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.900200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.900226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.900235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.908769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.908789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.908797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.920912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.920932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.920944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.929250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.929270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.929278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.940351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 
[2024-12-09 15:19:19.940370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.940378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.951013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.951033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.951041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.962555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.962575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.962582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.273 [2024-12-09 15:19:19.974697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.273 [2024-12-09 15:19:19.974715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.273 [2024-12-09 15:19:19.974724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:19.983724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:19.983743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:19.983751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:19.996099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:19.996119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:19.996127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.004791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.004812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.004821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.016869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.016892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.016900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.026495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.026515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.026523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.036668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.036696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.045580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.045601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.045610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.055321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.055342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.055350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.274 [2024-12-09 15:19:20.065368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.274 [2024-12-09 15:19:20.065390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.274 [2024-12-09 15:19:20.065399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.532 [2024-12-09 15:19:20.074901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.532 [2024-12-09 15:19:20.074923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.532 [2024-12-09 15:19:20.074932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.532 [2024-12-09 15:19:20.083785] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.532 [2024-12-09 15:19:20.083806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.532 [2024-12-09 15:19:20.083814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.532 [2024-12-09 15:19:20.096101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.532 [2024-12-09 15:19:20.096122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.096130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.108318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.108339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.108347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.120820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.120841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.132141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.132162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.132170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.141151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.141170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.141178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.150009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.150028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.150036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:18.533 [2024-12-09 15:19:20.160405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.160426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.160433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.171046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.171065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.171073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.179459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.179479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.179487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.191745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.191765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.191777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.199803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.199823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.199830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.211595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.211615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.211623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.220623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.220642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.220651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.229842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.229862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.229869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.239644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.239665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.239673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.248244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.248264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.248272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.259119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.259141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.259149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.271885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.271906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.271914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.283843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.283868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.283876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.292611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.292631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.292639] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.303411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.303432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.303440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.314254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.314273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.314282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.533 [2024-12-09 15:19:20.324508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.533 [2024-12-09 15:19:20.324535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.533 [2024-12-09 15:19:20.324549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.791 [2024-12-09 15:19:20.333410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.791 [2024-12-09 15:19:20.333432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.791 [2024-12-09 15:19:20.333442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.791 [2024-12-09 15:19:20.345167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.791 [2024-12-09 15:19:20.345189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.791 [2024-12-09 15:19:20.345198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.791 [2024-12-09 15:19:20.357617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.791 [2024-12-09 15:19:20.357638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.791 [2024-12-09 15:19:20.357646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.791 24327.00 IOPS, 95.03 MiB/s [2024-12-09T14:19:20.587Z] [2024-12-09 15:19:20.371718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.371739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.371753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.384431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.384453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.384461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.392764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.392785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.392793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.403375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.403395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.403403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.415346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.415370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.415378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.425595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.425617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.425625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.434289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.434311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.434319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.443979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.444008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.453136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.453157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.453165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.462852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.462878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.462886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.473619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.473639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.473647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.483483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.483504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.483512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.492153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.492173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.492182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.504079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.504100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.504108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.515638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.515659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.515668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.524034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.524055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.524063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.534784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.534804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.534813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.544012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.544032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.544040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.553263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.553284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.553292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.562534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.562555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.562563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.571212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 [2024-12-09 15:19:20.571238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.571246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.792 [2024-12-09 15:19:20.583099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:18.792 
[2024-12-09 15:19:20.583122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.792 [2024-12-09 15:19:20.583131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.590891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.590915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.590924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.601213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.601243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.601252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.610179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.610200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.610208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.621751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.621773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.621781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.631220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.631241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.631253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.640936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.640957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.640966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.651185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.651206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.651215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.661844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.661864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.661872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.670432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.670452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.670460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.680092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.680111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.680119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.689084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.689103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.689111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.698609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.698629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.698637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.709699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.709719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.709727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.718934] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.718959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.718967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.727299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.727320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.727327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.739372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.739393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.739401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.751598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.751618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.751626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.759571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.759600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.770550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.770569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.770577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.051 [2024-12-09 15:19:20.779900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.051 [2024-12-09 15:19:20.779920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.051 [2024-12-09 15:19:20.779928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:19.052 [2024-12-09 15:19:20.789296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.052 [2024-12-09 15:19:20.789316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.052 [2024-12-09 15:19:20.789324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.052 [2024-12-09 15:19:20.800400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.052 [2024-12-09 15:19:20.800421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.052 [2024-12-09 15:19:20.800428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.052 [2024-12-09 15:19:20.811356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.052 [2024-12-09 15:19:20.811376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.052 [2024-12-09 15:19:20.811383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.052 [2024-12-09 15:19:20.819871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.052 [2024-12-09 15:19:20.819891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.052 [2024-12-09 15:19:20.819899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.052 [2024-12-09 15:19:20.831145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.052 [2024-12-09 15:19:20.831165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.052 [2024-12-09 15:19:20.831172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.052 [2024-12-09 15:19:20.839522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.052 [2024-12-09 15:19:20.839541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.052 [2024-12-09 15:19:20.839549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.852454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.852478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.852486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.860630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.860650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.860658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.872272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.872293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.872301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.884495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.884515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.884523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.894878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.894898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.894909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.902922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.902941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.902949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.915153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.915173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.915181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.925250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.925269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.925277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.933756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.933776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.933784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.942735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.942755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.942763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.953593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.953613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.953621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.965893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.965914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.965922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.976619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.976638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.976652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.988550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.988573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.988581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:20.996490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.310 [2024-12-09 15:19:20.996509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.310 [2024-12-09 15:19:20.996517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.310 [2024-12-09 15:19:21.006360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.006379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.006386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.018487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.018505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.018513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.030441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.030460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.030468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.038684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.038703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.038711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.050161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.050180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.050188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.059805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.059824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.059832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.068922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.068941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.311 [2024-12-09 15:19:21.068952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.077098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.077117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.077125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.089988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.090008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.090016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.311 [2024-12-09 15:19:21.100281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.311 [2024-12-09 15:19:21.100300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.311 [2024-12-09 15:19:21.100308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.110730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.110752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.110761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.118897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.118917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.118925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.130474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.130494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.130502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.142323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.142343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23376 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.142352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.155084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.155104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.155113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.165261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.165286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.165294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.174725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.174744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.174752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.182937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.182957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.182965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.193840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.193860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.193868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.201775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.201795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.201803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.212210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.212236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.212243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.221661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.221681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.221688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.233810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.233829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.233841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.242307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.242326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.242333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.252524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.252545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.252553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.262012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.262031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.262039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.270155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.270175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.270182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.279458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.279478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.279486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.289039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.289060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.289068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.298730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.298751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.569 [2024-12-09 15:19:21.298758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.569 [2024-12-09 15:19:21.307991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.569 [2024-12-09 15:19:21.308010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-12-09 15:19:21.308018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.570 [2024-12-09 15:19:21.317329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.570 [2024-12-09 15:19:21.317349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-12-09 15:19:21.317357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.570 [2024-12-09 15:19:21.326234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.570 [2024-12-09 15:19:21.326253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-12-09 15:19:21.326265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.570 [2024-12-09 15:19:21.335308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 00:26:19.570 [2024-12-09 15:19:21.335327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.570 [2024-12-09 15:19:21.335335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.570 [2024-12-09 15:19:21.344378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 
00:26:19.570 [2024-12-09 15:19:21.344398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.570 [2024-12-09 15:19:21.344406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:19.570 [2024-12-09 15:19:21.353468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 
00:26:19.570 [2024-12-09 15:19:21.353488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.570 [2024-12-09 15:19:21.353496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:19.570 [2024-12-09 15:19:21.362734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d3dd0) 
00:26:19.570 [2024-12-09 15:19:21.362756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.570 [2024-12-09 15:19:21.362764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:19.828 24907.00 IOPS, 97.29 MiB/s 
00:26:19.828 Latency(us) 
00:26:19.828 [2024-12-09T14:19:21.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:19.828 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 
00:26:19.828 nvme0n1 : 2.00 24920.31 97.34 0.00 0.00 5131.73 2512.21 18350.08 
00:26:19.828 [2024-12-09T14:19:21.623Z] =================================================================================================================== 
00:26:19.828 [2024-12-09T14:19:21.623Z] Total : 24920.31 97.34 0.00 0.00 5131.73 2512.21 18350.08 
00:26:19.828 { 
00:26:19.828 "results": [ 
00:26:19.828 { 
00:26:19.828 "job": "nvme0n1", 
00:26:19.828 "core_mask": "0x2", 
00:26:19.828 "workload": "randread", 
00:26:19.828 "status": "finished", 
00:26:19.828 "queue_depth": 128, 
00:26:19.828 "io_size": 4096, 
00:26:19.828 "runtime": 2.004068, 
00:26:19.828 "iops": 24920.312085218666, 
00:26:19.828 "mibps": 97.34496908288541, 
00:26:19.828 "io_failed": 0, 
00:26:19.828 "io_timeout": 0, 
00:26:19.828 "avg_latency_us": 5131.728086561364, 
00:26:19.828 "min_latency_us": 2512.213333333333, 
00:26:19.828 "max_latency_us": 18350.08 
00:26:19.828 } 
00:26:19.828 ], 
00:26:19.828 "core_count": 1 
00:26:19.828 } 
00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:26:19.828 | .driver_specific 
00:26:19.828 | .nvme_error 
00:26:19.828 | .status_code 
00:26:19.828 | .command_transient_transport_error' 
00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 )) 
00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 1575210 00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1575210 ']' 00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1575210 00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.828 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575210 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575210' 00:26:20.087 killing process with pid 1575210 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1575210 00:26:20.087 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.087 00:26:20.087 Latency(us) 00:26:20.087 [2024-12-09T14:19:21.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.087 [2024-12-09T14:19:21.882Z] =================================================================================================================== 00:26:20.087 [2024-12-09T14:19:21.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1575210 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1575679 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1575679 /var/tmp/bperf.sock 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1575679 ']' 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:20.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.087 15:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.087 [2024-12-09 15:19:21.851624] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:20.087 [2024-12-09 15:19:21.851671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575679 ] 00:26:20.087 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.087 Zero copy mechanism will not be used. 00:26:20.345 [2024-12-09 15:19:21.924459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.345 [2024-12-09 15:19:21.960446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.345 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.345 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:20.345 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.345 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.602 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.602 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.602 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.602 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.602 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.602 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.860 nvme0n1 00:26:20.860 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:20.860 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:20.860 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:20.860 15:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:20.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.860 Zero copy mechanism will not be used. 00:26:20.860 Running I/O for 2 seconds... 00:26:20.860 [2024-12-09 15:19:22.644012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:20.860 [2024-12-09 15:19:22.644048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.860 [2024-12-09 15:19:22.644059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:20.860 [2024-12-09 15:19:22.648198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:20.860 [2024-12-09 15:19:22.648235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.860 [2024-12-09 15:19:22.648245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:20.860 [2024-12-09 15:19:22.652485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:20.860 [2024-12-09 15:19:22.652510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.860 [2024-12-09 15:19:22.652520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.656840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.656870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.656879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.661161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.661184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.661192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.665633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.665655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.665663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.670096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 
15:19:22.670116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.670124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.674832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.674853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.674861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.679543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.679564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.679572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.684490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.684510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.684518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.689394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.689416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.689424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.694503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.694523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.694531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.699644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.699664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.699672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.704725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.704746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.704754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.709759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.709788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.714788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.714810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.714817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.719819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.719840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.719848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.724826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.724847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.724855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.729806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.729826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.729833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.734914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.734935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.734943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.740019] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.740044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.740051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.745160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.745188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.750215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.750242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.750249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.755294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.755315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.755322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.760392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.760414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.760421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.765513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.765533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.119 [2024-12-09 15:19:22.765541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.119 [2024-12-09 15:19:22.770583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.119 [2024-12-09 15:19:22.770604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.770612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:26:21.120 [2024-12-09 15:19:22.775636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.775657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.775665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.780676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.780696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.780704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.785783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.785803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.785811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.790875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.790896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.790903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.795997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.796017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.796025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.801074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.801095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.801103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.806132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.806153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.806161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.811204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.811229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.811238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.816303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.816324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.816331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.821376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.821397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.821404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.826456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.826476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.826488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.831539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.831560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.831568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.836677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.836699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.836707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.841773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.841794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.841802] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.846849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.846870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.846878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.851908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.851927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.851934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.857099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.857120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.857128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.862231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.862252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.862260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.867391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.867412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.867420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.872549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.872576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.872584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.877640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.877661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 
15:19:22.877669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.882791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.882811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.882819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.887972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.887992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.888000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.893083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.893103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.893111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.898193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.898213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.898226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.903392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.903413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.903421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.120 [2024-12-09 15:19:22.908544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.120 [2024-12-09 15:19:22.908565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.120 [2024-12-09 15:19:22.908573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.913655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.913679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.913689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.918840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.918864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.918873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.923993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.924014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.924022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.929137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.929159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.929167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.934241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.934262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.934269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.939313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.939334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.939342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.944457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.944484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.944491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.949696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.949717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.949725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.954837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.954858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.954866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.959978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.959999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.960011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.965139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.965159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.965167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.970315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.970335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.970344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.975448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.975469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.975477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.980495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.980515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.980523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.985633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.985653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.985661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.990715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.990736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.990744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:22.995827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:22.995848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:22.995855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:23.001040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:23.001060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:23.001068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:23.006184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:23.006205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:23.006213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:23.011337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:23.011358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:23.011366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:23.016511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:23.016532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:23.016540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:23.021640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 
00:26:21.380 [2024-12-09 15:19:23.021660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.380 [2024-12-09 15:19:23.021667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.380 [2024-12-09 15:19:23.026803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.380 [2024-12-09 15:19:23.026824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.026831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.031880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.031900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.031907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.036976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.036996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.037004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.042350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.042371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.048735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.048757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.048770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.056049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.056070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.056078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.061898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.061920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.061928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.067735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.067756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.067764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.073528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.073549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.073556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.079373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.079396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.079403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.085226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.085246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.085254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.090914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.090934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.090942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.093993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.094013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.094021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.099896] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.099921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.099930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.105354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.105376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.105384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.111135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.111155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.111164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.117525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.117547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.117555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.124785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.124806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.124814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.131933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.131954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.131963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.139504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.139526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.139534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:21.381 [2024-12-09 15:19:23.146560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.146582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.146591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.152478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.152499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.152507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.159201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.159230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.159239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.165927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.165948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.165957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.381 [2024-12-09 15:19:23.171278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.381 [2024-12-09 15:19:23.171304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.381 [2024-12-09 15:19:23.171316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.176406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.176429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.176438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.181558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.181580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.181589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.186711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.186732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.186740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.191851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.191872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.191880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.196960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.196980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.196989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.202072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.202094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.202105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.207168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.207188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.207196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.212323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.212344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.212352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.217440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.217460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.217468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.222510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.222531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.222539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.227415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.227436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.227444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.232570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.232592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.232599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.237701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.237721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.237729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.242645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.242666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.242674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.247801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.247826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.247834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.252708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.252730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.252738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.257646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.257667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.257675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.262632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.262654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.262662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.267550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.267572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.267581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.641 [2024-12-09 15:19:23.272539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.641 [2024-12-09 15:19:23.272560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.641 [2024-12-09 15:19:23.272569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.277643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.277666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.277674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.282882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.282903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.282910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.287990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.288011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:21.642 [2024-12-09 15:19:23.288018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.293128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.293149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.293157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.298335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.298355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.298363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.303463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.303484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.303492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.308651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.308672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.308680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.313758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.313779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.313786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.318861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.318883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.318891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.323902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.323923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.323931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.329352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.329375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.329383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.334552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.334572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.334584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.340081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.340103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.340111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.345292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.345314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.345322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.350920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.350941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.350949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.356208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.356244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.356252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.361415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.361435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.361443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.366554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.366575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.366583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.371580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.371601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.371608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.374571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.374591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.374599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.380245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.380266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.380274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.385464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.385484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.385492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.390434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.390455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.390463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.395648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 
00:26:21.642 [2024-12-09 15:19:23.395669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.395677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.400846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.400867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.400875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.406035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.406056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.406063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.411238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.411259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.642 [2024-12-09 15:19:23.411267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.642 [2024-12-09 15:19:23.416410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.642 [2024-12-09 15:19:23.416430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.643 [2024-12-09 15:19:23.416438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.643 [2024-12-09 15:19:23.421606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.643 [2024-12-09 15:19:23.421626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.643 [2024-12-09 15:19:23.421637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.643 [2024-12-09 15:19:23.427560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.643 [2024-12-09 15:19:23.427581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.643 [2024-12-09 15:19:23.427589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.643 [2024-12-09 15:19:23.433435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.643 [2024-12-09 15:19:23.433459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.643 [2024-12-09 15:19:23.433468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.438716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.438741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.438750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.443636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.443661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.443669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.448540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.448562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.448571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.453677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.453699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.453707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.458871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.458894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.458902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.463986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.464007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.464015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.469108] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.469133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.469142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.474275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.474295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.474303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.479384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.479405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.479412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.484493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.484515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.484522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.489669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.489691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.489698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.494806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.494827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.494835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.499984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.500006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.500014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:21.902 [2024-12-09 15:19:23.505213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.505239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.505247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.510375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.510396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.510403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.515542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.515563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.515571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.520666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.520687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.520695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.525763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.525785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.525793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.530946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.530968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.530976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.536071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.536092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.536100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.541272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.541294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.541302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.546451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.546472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.902 [2024-12-09 15:19:23.546480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.902 [2024-12-09 15:19:23.551580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.902 [2024-12-09 15:19:23.551600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.551609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.556722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.556744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.556758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.561944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.561966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.561975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.567228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.567248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.567255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.572461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.572482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.572490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.577684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.577704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.577711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.582666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.582687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.582696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.587888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.587909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.587916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.593176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.593199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.593207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.598153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.598174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.598181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.603127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.603148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.603156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.608242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:21.903 [2024-12-09 15:19:23.608271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.613460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.613481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.613488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.618684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.618706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.618714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.623797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.623818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.623826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.628962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.628984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.628993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.634102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.634122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.634131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.639304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.639325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.639333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 5929.00 IOPS, 741.12 MiB/s [2024-12-09T14:19:23.698Z] [2024-12-09 15:19:23.645317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.645339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.645351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.650856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.650878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.650886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.657092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.657113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.657121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.662291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.662312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.662320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.667341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.667363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.667371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.672301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.672322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.672330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.677373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.677394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.677402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.682321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 
00:26:21.903 [2024-12-09 15:19:23.682342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.903 [2024-12-09 15:19:23.682350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.903 [2024-12-09 15:19:23.687444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.903 [2024-12-09 15:19:23.687465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.904 [2024-12-09 15:19:23.687473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.904 [2024-12-09 15:19:23.692672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:21.904 [2024-12-09 15:19:23.692699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.904 [2024-12-09 15:19:23.692708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.697926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.697950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.697958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.703090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.703113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.703122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.708341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.708362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.708370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.713563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.713584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.713592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.718741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.718762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.718770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.723849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.723869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.723877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.729122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.729143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.729151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.734371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.734392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.734399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.739512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.739533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.739540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.744642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.744663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.744671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.749723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.749743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.749751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.754763] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.754783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.754791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.759926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.759947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.759954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.765075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.765095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.765103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.770381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.770401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.770409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.775712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.775732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.775739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.781007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.781028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.781040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.786404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.786424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.786432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.791505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.791525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.791534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.796625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.796645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.796653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.801738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.801759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.801767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.806829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.806850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.806858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.163 [2024-12-09 15:19:23.811985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.163 [2024-12-09 15:19:23.812006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.163 [2024-12-09 15:19:23.812014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.817150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.817171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.817179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.822436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.822457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.822465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.827528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.827549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.827557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.832655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.832675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.832683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.837829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.837857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.842995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.843016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.843023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.848148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.848168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.848175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.853265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.853285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.853293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.858445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.858467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.858475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.863547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.863568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.863576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.868936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.868955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.868967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.874495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.874517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.874525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.879718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.879739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.879747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.884828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.884850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.884857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.890055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.890075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.895196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.895224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.164 [2024-12-09 15:19:23.895232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.900315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.900335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.900342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.905441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.905462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.905470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.910581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.910602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.910610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.915733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.915758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.915766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.920926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.920947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.920955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.926078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.926098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.926106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.931187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.931208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.931216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.936377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.936398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.936405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.941510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.941530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.941537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.946650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.946671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.946679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.164 [2024-12-09 15:19:23.951844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.164 [2024-12-09 15:19:23.951864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.164 [2024-12-09 15:19:23.951872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.957070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.957094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.957103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.962208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.962237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.962247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.967543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.967564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.967572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.972891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.972913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.972921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.978139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.978159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.978167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.983311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.983332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.983340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.988423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.988443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.423 [2024-12-09 15:19:23.988451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.423 [2024-12-09 15:19:23.993524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.423 [2024-12-09 15:19:23.993544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:23.993552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:23.998641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:23.998661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:23.998670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.003781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 
00:26:22.424 [2024-12-09 15:19:24.003801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.003812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.008941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.008961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.008969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.014028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.014048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.014056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.019194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.019215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.019229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.024326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.024347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.024355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.029448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.029469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.029476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.034567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.034587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.034595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.039686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.039707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.039714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.044861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.044882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.044890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.049951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.049974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.049982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.055099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.055119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.055127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.060238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.060258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.060267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.065355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.065375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.065383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.070591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.070611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.070619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.075918] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.075938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.075945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.081269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.081290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.081298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.086510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.086530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.086537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.091621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.091642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.091653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.096659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.096680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.096687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.101929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.101949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.101957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.107176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.107196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.107205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.112320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.112340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.112347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.117355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.117376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.117384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.122749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.122770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.122778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.127999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.128019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.128027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.133121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.424 [2024-12-09 15:19:24.133139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.424 [2024-12-09 15:19:24.133147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.424 [2024-12-09 15:19:24.138254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.138277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.138285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.143351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.143371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.143379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.149140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.149162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.149170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.154074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.154095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.154103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.158879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.158900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.158907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.163895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.163916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.163924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.168625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.168647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.168656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.173681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.173701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.173709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.178727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.178747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.178755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.184174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.184195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.184202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.189719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.189739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.189747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.195030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.195051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.195059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.200443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.200463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.200471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.205525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.205546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.205554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.210658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.210678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.425 [2024-12-09 15:19:24.210686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.425 [2024-12-09 15:19:24.215836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.425 [2024-12-09 15:19:24.215860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.425 [2024-12-09 15:19:24.215870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.684 [2024-12-09 15:19:24.220939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.684 [2024-12-09 15:19:24.220963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.684 [2024-12-09 15:19:24.220972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.684 [2024-12-09 15:19:24.226128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.684 [2024-12-09 15:19:24.226150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.684 [2024-12-09 15:19:24.226164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.684 [2024-12-09 15:19:24.231310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.684 [2024-12-09 15:19:24.231330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.684 [2024-12-09 15:19:24.231337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.684 [2024-12-09 15:19:24.236438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.684 [2024-12-09 15:19:24.236459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.684 [2024-12-09 15:19:24.236467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.684 [2024-12-09 15:19:24.241538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.241559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.241567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.246644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.246665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.246672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.251760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.251781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.251789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.256823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.256843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.256851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.261929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.261949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.261957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.267060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.267081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.267089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.272177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.272201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.272209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.277361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.277381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.277389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.282444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.282464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.282472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.287586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.287607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.287614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.292757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.292778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.292786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.298033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.298054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.298061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.303647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.303668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.303676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.309017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.309038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.309045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.314262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.314282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.314289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.319453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.319473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.319481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.324584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 
00:26:22.685 [2024-12-09 15:19:24.324604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.324612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.329750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.329779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.334821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.334841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.334849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.339884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.339904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.339912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.345075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.345095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.345103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.350226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.350245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.350253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.355475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.355496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.355504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.360678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.360702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.360709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.366130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.366159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.371355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.371376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.685 [2024-12-09 15:19:24.371384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.685 [2024-12-09 15:19:24.376436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.685 [2024-12-09 15:19:24.376457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.376465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.381685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.381705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.381713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.387030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.387050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.387058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.392534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.392554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.392562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.397746] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.397767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.397774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.402909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.402930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.402937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.408052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.408073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.408081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.413229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.413249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.413256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.418345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.418365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.418373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.423468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.423489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.423497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.428556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.428576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.428584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:22.686 [2024-12-09 15:19:24.433647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.433667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.433675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.438770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.438791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.438798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.443897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.443918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.443926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.449036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.449056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.449068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.454094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.454114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.454122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.459190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.459210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.459223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.464375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.464396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.464403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.469710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.469731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.469738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.686 [2024-12-09 15:19:24.475142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.686 [2024-12-09 15:19:24.475165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.686 [2024-12-09 15:19:24.475174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.480529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.480552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.480560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.485678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.485701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.485710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.490760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.490781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.490789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.495755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.495779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.495786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.500784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.500805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.500814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.505811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.505832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.505839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.510757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.510777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.510785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.516047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.516067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.516075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.521404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.521424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.526577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.526597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.526605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.531801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.531822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.531830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.536948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.536969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.945 [2024-12-09 15:19:24.536977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.542157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.542178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.945 [2024-12-09 15:19:24.542185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.945 [2024-12-09 15:19:24.547317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.945 [2024-12-09 15:19:24.547338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.547345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.552414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.552435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.552442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.557420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.557441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.557449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.562602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.562622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.562630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.567743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.567764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.567771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.572785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.572806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.572814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.577926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.577946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.577954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.583032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.583053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.583064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.588057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.588077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.588084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.593188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.593208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.593216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.598302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.598322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.598330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.603366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.603387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.603395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.608561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.608580] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.608588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.613658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.613678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.613687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.618778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.618799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.618807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.623930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.623952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.623960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.629141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.629162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.629169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.634328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.634350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.634357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.946 [2024-12-09 15:19:24.639778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2123420) 00:26:22.946 [2024-12-09 15:19:24.639800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.946 [2024-12-09 15:19:24.639809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:22.946 5956.50 IOPS, 744.56 MiB/s 00:26:22.946 Latency(us) 00:26:22.946 [2024-12-09T14:19:24.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.946 
Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:22.946 nvme0n1 : 2.00 5957.48 744.69 0.00 0.00 2682.81 686.57 9237.46
00:26:22.946 [2024-12-09T14:19:24.741Z] ===================================================================================================================
00:26:22.946 [2024-12-09T14:19:24.741Z] Total : 5957.48 744.69 0.00 0.00 2682.81 686.57 9237.46
00:26:22.946 {
00:26:22.946 "results": [
00:26:22.946 {
00:26:22.946 "job": "nvme0n1",
00:26:22.946 "core_mask": "0x2",
00:26:22.946 "workload": "randread",
00:26:22.946 "status": "finished",
00:26:22.946 "queue_depth": 16,
00:26:22.946 "io_size": 131072,
00:26:22.946 "runtime": 2.002356,
00:26:22.946 "iops": 5957.482086102571,
00:26:22.946 "mibps": 744.6852607628214,
00:26:22.946 "io_failed": 0,
00:26:22.946 "io_timeout": 0,
00:26:22.946 "avg_latency_us": 2682.805522516157,
00:26:22.946 "min_latency_us": 686.567619047619,
00:26:22.946 "max_latency_us": 9237.455238095237
00:26:22.946 }
00:26:22.946 ],
00:26:22.946 "core_count": 1
00:26:22.946 }
00:26:22.946 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:22.946 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:22.946 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:22.946 | .driver_specific
00:26:22.946 | .nvme_error
00:26:22.946 | .status_code
00:26:22.946 | .command_transient_transport_error'
00:26:22.946 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 ))
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1575679
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1575679 ']'
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1575679
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575679
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575679'
00:26:23.205 killing process with pid 1575679
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1575679
00:26:23.205 Received shutdown signal, test time was about 2.000000 seconds
00:26:23.205
00:26:23.205 Latency(us)
00:26:23.205 [2024-12-09T14:19:25.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:23.205 [2024-12-09T14:19:25.000Z]
===================================================================================================================
00:26:23.205 [2024-12-09T14:19:25.000Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:23.205 15:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1575679
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1576352
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1576352 /var/tmp/bperf.sock
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1576352 ']'
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:23.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:23.463 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.463 [2024-12-09 15:19:25.123137] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
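
The get_transient_errcount trace above is how digest.sh judges a run: it reads back the NVMe error counters kept by the bdev layer over the bperf RPC socket and requires the COMMAND TRANSIENT TRANSPORT ERROR (00/22) bucket to be non-zero, which is the (( 385 > 0 )) check. A minimal sketch of that check, using only the rpc.py invocation and jq filter shown in the trace (the --nvme-error-stat prerequisite is visible in the next run's setup below):

  # Assumes a bdevperf instance is listening on /var/tmp/bperf.sock and that
  # bdev_nvme_set_options was called with --nvme-error-stat so the
  # per-status-code counters are populated.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
  (( errcount > 0 )) || echo "no transient transport errors were counted" >&2

Note that io_failed is 0 in the JSON summary above even though this counter read back 385: the injected digest errors are counted and retried rather than surfaced as failed I/O.
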
00:26:23.463 [2024-12-09 15:19:25.123184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576352 ]
00:26:23.463 [2024-12-09 15:19:25.195403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:23.463 [2024-12-09 15:19:25.233928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:23.720 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:23.720 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:23.720 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:23.720 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:23.978 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:23.978 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.978 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.978 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.978 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:23.978 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:24.236 nvme0n1
00:26:24.236 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:24.236 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:24.236 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:24.236 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:24.236 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:24.236 15:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:24.236 Running I/O for 2 seconds...
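
The xtrace output above wires up the next case (randwrite, 4 KiB blocks, queue depth 128): a fresh bdevperf instance on /var/tmp/bperf.sock, NVMe error statistics plus a -1 bdev retry count, a controller attached over TCP with data digest (--ddgst) enabled, and crc32c error injection switched from disable to corrupt in the accel layer before perform_tests starts the two-second workload. A condensed sketch of that sequence follows; the $SPDK shorthand and the assumption that rpc_cmd resolves to the main test application's default RPC socket are mine, everything else is taken from the trace:

  # $SPDK is the workspace checkout used by this job; rpc_cmd is assumed to reach
  # the main test application on SPDK's default RPC socket (the trace only shows
  # the helper name, not the socket it targets).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_cmd() { "$SPDK"/scripts/rpc.py "$@"; }

  # Host side: bdevperf on its own RPC socket, 4 KiB random writes, queue depth 128, 2 s run.
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  sleep 1   # digest.sh uses waitforlisten on /var/tmp/bperf.sock instead of a fixed sleep

  # NVMe error counters on, bdev retry count -1; in the previous run this kept
  # io_failed at 0 even though 385 commands hit digest errors.
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Attach the target subsystem with TCP data digest enabled, producing bdev nvme0n1.
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Start corrupting crc32c results so digest verification fails during the workload.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the I/O; the mismatches complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the data digest negotiated on the connection, each corrupted crc32c result shows up as a data digest error like the ones logged below, and the corresponding WRITE completes with the transient transport status that the error counter check keys on.
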
00:26:24.236 [2024-12-09 15:19:25.943672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee01f8 00:26:24.236 [2024-12-09 15:19:25.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.944687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:25.952291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef4298 00:26:24.236 [2024-12-09 15:19:25.953269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.953292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:25.961739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6890 00:26:24.236 [2024-12-09 15:19:25.962833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.962853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:25.971136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eef270 00:26:24.236 [2024-12-09 15:19:25.972293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.972312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:25.980527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:24.236 [2024-12-09 15:19:25.981834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.981853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:25.989984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee99d8 00:26:24.236 [2024-12-09 15:19:25.991423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.991441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:25.998311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efe720 00:26:24.236 [2024-12-09 15:19:25.999397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:25.999415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:26.006474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee01f8 00:26:24.236 [2024-12-09 15:19:26.007832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:26.007850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:26.015537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef4298 00:26:24.236 [2024-12-09 15:19:26.016609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:26.016627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.236 [2024-12-09 15:19:26.024414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efe2e8 00:26:24.236 [2024-12-09 15:19:26.025382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-12-09 15:19:26.025401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.033712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6cc8 00:26:24.495 [2024-12-09 15:19:26.034710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.034732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.042722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef5be8 00:26:24.495 [2024-12-09 15:19:26.043714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.043734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.051879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eec840 00:26:24.495 [2024-12-09 15:19:26.052610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.052630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.060329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eddc00 00:26:24.495 [2024-12-09 15:19:26.061677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.061698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.068588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0788 00:26:24.495 [2024-12-09 15:19:26.069335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.069354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.076915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee2c28 00:26:24.495 [2024-12-09 15:19:26.077625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.077644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.087945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efa3a0 00:26:24.495 [2024-12-09 15:19:26.088919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.088938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.097353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed4e8 00:26:24.495 [2024-12-09 15:19:26.098573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.098592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.107183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6b70 00:26:24.495 [2024-12-09 15:19:26.108670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.108689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.113934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ede470 00:26:24.495 [2024-12-09 15:19:26.114669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.114688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.123249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee7818 00:26:24.495 [2024-12-09 15:19:26.124020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.495 [2024-12-09 15:19:26.124038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.495 [2024-12-09 15:19:26.134054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee2c28 00:26:24.496 [2024-12-09 15:19:26.135186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.135205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.142461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee27f0 00:26:24.496 [2024-12-09 15:19:26.143440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.143458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.151206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0ff8 00:26:24.496 [2024-12-09 15:19:26.152190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.152208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.160611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed0b0 00:26:24.496 [2024-12-09 15:19:26.161619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.161639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.169677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efe720 00:26:24.496 [2024-12-09 15:19:26.170314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.170333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.178741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efeb58 00:26:24.496 [2024-12-09 15:19:26.179633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.179652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.187951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef4f40 00:26:24.496 [2024-12-09 15:19:26.189053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.189072] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.197055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef3e60 00:26:24.496 [2024-12-09 15:19:26.198161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.198181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.205583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef96f8 00:26:24.496 [2024-12-09 15:19:26.206624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.206644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.214808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0ff8 00:26:24.496 [2024-12-09 15:19:26.215465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.215484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.225081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef46d0 00:26:24.496 [2024-12-09 15:19:26.226543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.226562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.234513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efa3a0 00:26:24.496 [2024-12-09 15:19:26.236071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.236090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.241076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eec408 00:26:24.496 [2024-12-09 15:19:26.241932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.241950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.252849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeea00 00:26:24.496 [2024-12-09 15:19:26.254449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.254467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.259192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb328 00:26:24.496 [2024-12-09 15:19:26.259906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.259925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.269344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf988 00:26:24.496 [2024-12-09 15:19:26.270471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.270490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.278722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef9b30 00:26:24.496 [2024-12-09 15:19:26.280062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.496 [2024-12-09 15:19:26.280082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.496 [2024-12-09 15:19:26.288242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eddc00 00:26:24.755 [2024-12-09 15:19:26.289708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.289730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.294982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee5ec8 00:26:24.755 [2024-12-09 15:19:26.295742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.295766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.306792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf988 00:26:24.755 [2024-12-09 15:19:26.308269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.308289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.313387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeaab8 00:26:24.755 [2024-12-09 15:19:26.314088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 
15:19:26.314106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.324203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef9b30 00:26:24.755 [2024-12-09 15:19:26.325265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.325284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.333204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efeb58 00:26:24.755 [2024-12-09 15:19:26.334364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.334383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.341450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee84c0 00:26:24.755 [2024-12-09 15:19:26.342821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.342839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.350571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef35f0 00:26:24.755 [2024-12-09 15:19:26.351634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.351652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.359410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee5a90 00:26:24.755 [2024-12-09 15:19:26.360314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.360332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.369021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef57b0 00:26:24.755 [2024-12-09 15:19:26.370269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.370288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.377364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6b70 00:26:24.755 [2024-12-09 15:19:26.378162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.755 [2024-12-09 15:19:26.378185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.385791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee99d8 00:26:24.755 [2024-12-09 15:19:26.386617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.386635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.394691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee73e0 00:26:24.755 [2024-12-09 15:19:26.395378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.395396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.403690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee5ec8 00:26:24.755 [2024-12-09 15:19:26.404598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.404617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.413145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee9e10 00:26:24.755 [2024-12-09 15:19:26.414161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.414180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.422317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eef6a8 00:26:24.755 [2024-12-09 15:19:26.422887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.422907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.431036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed920 00:26:24.755 [2024-12-09 15:19:26.431889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.431908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.439937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee27f0 00:26:24.755 [2024-12-09 15:19:26.440616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7333 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.440635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.448389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef96f8 00:26:24.755 [2024-12-09 15:19:26.449070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.449088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.459471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6458 00:26:24.755 [2024-12-09 15:19:26.460551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.460570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.468106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:24.755 [2024-12-09 15:19:26.469147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.755 [2024-12-09 15:19:26.469166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.755 [2024-12-09 15:19:26.476622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0ff8 00:26:24.755 [2024-12-09 15:19:26.477657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.477675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.487854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef92c0 00:26:24.756 [2024-12-09 15:19:26.489348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.489367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.494210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6b70 00:26:24.756 [2024-12-09 15:19:26.494897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.494914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.502689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eea680 00:26:24.756 [2024-12-09 15:19:26.503329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:18034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.503347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.512548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eea248 00:26:24.756 [2024-12-09 15:19:26.513303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.513322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.522191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eee5c8 00:26:24.756 [2024-12-09 15:19:26.523138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.523156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.531027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee27f0 00:26:24.756 [2024-12-09 15:19:26.531948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.531966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.539939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef5be8 00:26:24.756 [2024-12-09 15:19:26.540841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.756 [2024-12-09 15:19:26.540859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.756 [2024-12-09 15:19:26.548989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eee190 00:26:25.014 [2024-12-09 15:19:26.549940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.549962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.558076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeaab8 00:26:25.014 [2024-12-09 15:19:26.559024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.559045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.566985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eddc00 00:26:25.014 [2024-12-09 15:19:26.567900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:13805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.567919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.575866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed4e8 00:26:25.014 [2024-12-09 15:19:26.576769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.576787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.584821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef8618 00:26:25.014 [2024-12-09 15:19:26.585755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.585775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.593810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee5220 00:26:25.014 [2024-12-09 15:19:26.594781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.594800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.602147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee23b8 00:26:25.014 [2024-12-09 15:19:26.603072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.603091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.610821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee0a68 00:26:25.014 [2024-12-09 15:19:26.611522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.611544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.619791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef8618 00:26:25.014 [2024-12-09 15:19:26.620502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.620522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.629272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee12d8 00:26:25.014 [2024-12-09 15:19:26.630191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.630209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.638484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef35f0 00:26:25.014 [2024-12-09 15:19:26.639000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.639020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.648166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eee5c8 00:26:25.014 [2024-12-09 15:19:26.649074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.649093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.657779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeee38 00:26:25.014 [2024-12-09 15:19:26.658999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.659018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.666964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee4578 00:26:25.014 [2024-12-09 15:19:26.667771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.014 [2024-12-09 15:19:26.667790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.014 [2024-12-09 15:19:26.675296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef92c0 00:26:25.014 [2024-12-09 15:19:26.676306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.676325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.684848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef7100 00:26:25.015 [2024-12-09 15:19:26.685714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.685733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.693188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef92c0 00:26:25.015 [2024-12-09 
15:19:26.694136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.694154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.704411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed0b0 00:26:25.015 [2024-12-09 15:19:26.705892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.705911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.712856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf118 00:26:25.015 [2024-12-09 15:19:26.713843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.713861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.722835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee7818 00:26:25.015 [2024-12-09 15:19:26.724431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.724449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.729290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee4de8 00:26:25.015 [2024-12-09 15:19:26.729923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.729941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.738200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:25.015 [2024-12-09 15:19:26.739085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.739104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.749369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6020 00:26:25.015 [2024-12-09 15:19:26.750595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.750614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.756718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf988 
00:26:25.015 [2024-12-09 15:19:26.757354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.757372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.765884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee9e10 00:26:25.015 [2024-12-09 15:19:26.766742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.766761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.774386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:25.015 [2024-12-09 15:19:26.775150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.775168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.783717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef8a50 00:26:25.015 [2024-12-09 15:19:26.784558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.784577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.792763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef3e60 00:26:25.015 [2024-12-09 15:19:26.793511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.793531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.015 [2024-12-09 15:19:26.801723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee01f8 00:26:25.015 [2024-12-09 15:19:26.802510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.015 [2024-12-09 15:19:26.802529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.810923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeaef0 00:26:25.273 [2024-12-09 15:19:26.811701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.273 [2024-12-09 15:19:26.811728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.820414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) 
with pdu=0x200016ede470 00:26:25.273 [2024-12-09 15:19:26.821521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.273 [2024-12-09 15:19:26.821542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.829845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef7538 00:26:25.273 [2024-12-09 15:19:26.831072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.273 [2024-12-09 15:19:26.831092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.838167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6b70 00:26:25.273 [2024-12-09 15:19:26.838958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.273 [2024-12-09 15:19:26.838976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.847059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef8618 00:26:25.273 [2024-12-09 15:19:26.847825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.273 [2024-12-09 15:19:26.847847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.856059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef4298 00:26:25.273 [2024-12-09 15:19:26.856802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.273 [2024-12-09 15:19:26.856820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.273 [2024-12-09 15:19:26.867204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eebfd0 00:26:25.274 [2024-12-09 15:19:26.868797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.868815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.873573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee3d08 00:26:25.274 [2024-12-09 15:19:26.874300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.874319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.882670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21546c0) with pdu=0x200016efa7d8 00:26:25.274 [2024-12-09 15:19:26.883349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.883369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.891660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efac10 00:26:25.274 [2024-12-09 15:19:26.892304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.892322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.900562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efa3a0 00:26:25.274 [2024-12-09 15:19:26.901201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.901223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.910010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eddc00 00:26:25.274 [2024-12-09 15:19:26.911015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.911034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.919978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:25.274 [2024-12-09 15:19:26.921030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.921049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.929491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0bc0 00:26:25.274 [2024-12-09 15:19:26.930868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.930896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.937550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef2948 00:26:25.274 28201.00 IOPS, 110.16 MiB/s [2024-12-09T14:19:27.069Z] [2024-12-09 15:19:26.939246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.939264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 
15:19:26.947949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf550 00:26:25.274 [2024-12-09 15:19:26.949412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.949431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.957127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef5378 00:26:25.274 [2024-12-09 15:19:26.958603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.958622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.964755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efe720 00:26:25.274 [2024-12-09 15:19:26.965450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.965480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.974034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef5378 00:26:25.274 [2024-12-09 15:19:26.974789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.974808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.982277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efb480 00:26:25.274 [2024-12-09 15:19:26.983153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.983171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.992744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6020 00:26:25.274 [2024-12-09 15:19:26.994236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:26.994255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:26.999351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee5220 00:26:25.274 [2024-12-09 15:19:26.999984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.000002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:26:25.274 [2024-12-09 15:19:27.010269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb328 00:26:25.274 [2024-12-09 15:19:27.011362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.011381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:27.018737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef1868 00:26:25.274 [2024-12-09 15:19:27.019874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.019893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:27.027974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efb8b8 00:26:25.274 [2024-12-09 15:19:27.028668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.028686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:27.036242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efdeb0 00:26:25.274 [2024-12-09 15:19:27.037047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.037065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:27.046408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee5658 00:26:25.274 [2024-12-09 15:19:27.047587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.047605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:27.054507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee1f80 00:26:25.274 [2024-12-09 15:19:27.055576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.055594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.274 [2024-12-09 15:19:27.063475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb760 00:26:25.274 [2024-12-09 15:19:27.064275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.274 [2024-12-09 15:19:27.064295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 
cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.072180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef81e0 00:26:25.533 [2024-12-09 15:19:27.073084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.073105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.081415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee88f8 00:26:25.533 [2024-12-09 15:19:27.081828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.081851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.090774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee8088 00:26:25.533 [2024-12-09 15:19:27.091414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.091434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.099937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efbcf0 00:26:25.533 [2024-12-09 15:19:27.100954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.100973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.109342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef9b30 00:26:25.533 [2024-12-09 15:19:27.110507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.110527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.118779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef5be8 00:26:25.533 [2024-12-09 15:19:27.120032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.120052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.127137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edfdc0 00:26:25.533 [2024-12-09 15:19:27.128065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.128084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.137565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee3d08 00:26:25.533 [2024-12-09 15:19:27.138910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.138929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.145978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb328 00:26:25.533 [2024-12-09 15:19:27.146899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.146918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.154315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee8d30 00:26:25.533 [2024-12-09 15:19:27.155223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.155241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.163880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eee190 00:26:25.533 [2024-12-09 15:19:27.164920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.164938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.172424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee9e10 00:26:25.533 [2024-12-09 15:19:27.173084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.173103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.181411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef57b0 00:26:25.533 [2024-12-09 15:19:27.182085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.182105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.191585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef96f8 00:26:25.533 [2024-12-09 15:19:27.192836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.192854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.199961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efc998 00:26:25.533 [2024-12-09 15:19:27.200754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.200773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.208285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6cc8 00:26:25.533 [2024-12-09 15:19:27.209181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.209200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.219596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef7538 00:26:25.533 [2024-12-09 15:19:27.221016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.221035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.228034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef96f8 00:26:25.533 [2024-12-09 15:19:27.228974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.228993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.236855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef57b0 00:26:25.533 [2024-12-09 15:19:27.237753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.237772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.246001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efb048 00:26:25.533 [2024-12-09 15:19:27.247139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.247158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.256195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed0b0 00:26:25.533 [2024-12-09 15:19:27.257857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.257874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.262834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6fa8 00:26:25.533 [2024-12-09 15:19:27.263716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.263734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.272807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb328 00:26:25.533 [2024-12-09 15:19:27.273717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.273735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.281997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efda78 00:26:25.533 [2024-12-09 15:19:27.283100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.283118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.290540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf118 00:26:25.533 [2024-12-09 15:19:27.291661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.533 [2024-12-09 15:19:27.291680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.533 [2024-12-09 15:19:27.299640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0ff8 00:26:25.534 [2024-12-09 15:19:27.300784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.534 [2024-12-09 15:19:27.300802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.534 [2024-12-09 15:19:27.307715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efc560 00:26:25.534 [2024-12-09 15:19:27.308699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.534 [2024-12-09 15:19:27.308718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.534 [2024-12-09 15:19:27.316688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf988 00:26:25.534 [2024-12-09 15:19:27.317503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.534 [2024-12-09 
15:19:27.317524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.534 [2024-12-09 15:19:27.325723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0ff8 00:26:25.534 [2024-12-09 15:19:27.326615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.534 [2024-12-09 15:19:27.326641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.335869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed920 00:26:25.792 [2024-12-09 15:19:27.336806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.336828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.345122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eea248 00:26:25.792 [2024-12-09 15:19:27.346267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.346287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.352486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efd208 00:26:25.792 [2024-12-09 15:19:27.353045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.353064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.361462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0350 00:26:25.792 [2024-12-09 15:19:27.361989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.362008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.370610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef81e0 00:26:25.792 [2024-12-09 15:19:27.371022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.371041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.381938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edf550 00:26:25.792 [2024-12-09 15:19:27.383409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:25.792 [2024-12-09 15:19:27.383428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.388333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:25.792 [2024-12-09 15:19:27.388994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.389013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.398639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee84c0 00:26:25.792 [2024-12-09 15:19:27.399739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.399758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.406971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0ff8 00:26:25.792 [2024-12-09 15:19:27.407632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.407650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.415147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee8d30 00:26:25.792 [2024-12-09 15:19:27.415860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.415879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.425145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efdeb0 00:26:25.792 [2024-12-09 15:19:27.425960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.425979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.433474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee95a0 00:26:25.792 [2024-12-09 15:19:27.434333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.792 [2024-12-09 15:19:27.434354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.792 [2024-12-09 15:19:27.444593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef81e0 00:26:25.792 [2024-12-09 15:19:27.446005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9483 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.446024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.451251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edece0 00:26:25.793 [2024-12-09 15:19:27.451910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.451928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.462487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef7100 00:26:25.793 [2024-12-09 15:19:27.463618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.463637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.469784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efcdd0 00:26:25.793 [2024-12-09 15:19:27.470452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.470471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.480928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6300 00:26:25.793 [2024-12-09 15:19:27.482072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.482091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.489307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eec840 00:26:25.793 [2024-12-09 15:19:27.489992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.490011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.498434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee01f8 00:26:25.793 [2024-12-09 15:19:27.499031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.499050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.507580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeaef0 00:26:25.793 [2024-12-09 15:19:27.508398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:13210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.508416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.516751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb328 00:26:25.793 [2024-12-09 15:19:27.517849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.517868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.526141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee12d8 00:26:25.793 [2024-12-09 15:19:27.527324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.527343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.533938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee4de8 00:26:25.793 [2024-12-09 15:19:27.534433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.534452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.543271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef6cc8 00:26:25.793 [2024-12-09 15:19:27.543996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.544016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.551770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee23b8 00:26:25.793 [2024-12-09 15:19:27.552484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.552507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.561332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0788 00:26:25.793 [2024-12-09 15:19:27.562152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.562170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.571294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee49b0 00:26:25.793 [2024-12-09 15:19:27.572264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:19696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.572283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.793 [2024-12-09 15:19:27.580727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeb760 00:26:25.793 [2024-12-09 15:19:27.581959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.793 [2024-12-09 15:19:27.581977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:26.051 [2024-12-09 15:19:27.588029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efc128 00:26:26.051 [2024-12-09 15:19:27.588757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.051 [2024-12-09 15:19:27.588779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.599371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eee190 00:26:26.052 [2024-12-09 15:19:27.600580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.600602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.607582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef1868 00:26:26.052 [2024-12-09 15:19:27.609011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.609030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.616657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee8d30 00:26:26.052 [2024-12-09 15:19:27.617749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.617769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.625713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee0a68 00:26:26.052 [2024-12-09 15:19:27.626670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.626690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.634669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeff18 00:26:26.052 [2024-12-09 15:19:27.635640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.635660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.644367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eef270 00:26:26.052 [2024-12-09 15:19:27.645453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.645473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.653762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef8e88 00:26:26.052 [2024-12-09 15:19:27.654944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.654963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.663143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eed4e8 00:26:26.052 [2024-12-09 15:19:27.664453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.664471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.669659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efc128 00:26:26.052 [2024-12-09 15:19:27.670330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.670349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.679608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eecc78 00:26:26.052 [2024-12-09 15:19:27.680353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.680372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.687968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eef270 00:26:26.052 [2024-12-09 15:19:27.688806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.688826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.697965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6738 00:26:26.052 [2024-12-09 
15:19:27.698852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.698872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.707397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee23b8 00:26:26.052 [2024-12-09 15:19:27.708523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.708543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.716614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef3e60 00:26:26.052 [2024-12-09 15:19:27.717370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.717389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.725309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eee5c8 00:26:26.052 [2024-12-09 15:19:27.726354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.726373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.734700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef2510 00:26:26.052 [2024-12-09 15:19:27.735863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.735882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.742996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee4de8 00:26:26.052 [2024-12-09 15:19:27.743792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.743810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.751788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee6300 00:26:26.052 [2024-12-09 15:19:27.752622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.752641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.761900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee1f80 
00:26:26.052 [2024-12-09 15:19:27.763179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.763198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.770479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee1710 00:26:26.052 [2024-12-09 15:19:27.771445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.771464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.778836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee1710 00:26:26.052 [2024-12-09 15:19:27.779796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.779814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.788059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efc560 00:26:26.052 [2024-12-09 15:19:27.788640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.788663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.797211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee4578 00:26:26.052 [2024-12-09 15:19:27.798117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.798135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.806165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee2c28 00:26:26.052 [2024-12-09 15:19:27.807082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.807100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.815355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efc128 00:26:26.052 [2024-12-09 15:19:27.816042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.816060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.825659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with 
pdu=0x200016eea680 00:26:26.052 [2024-12-09 15:19:27.827065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:26.052 [2024-12-09 15:19:27.831998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee3060 00:26:26.052 [2024-12-09 15:19:27.832605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.052 [2024-12-09 15:19:27.832624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:26.053 [2024-12-09 15:19:27.841466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edece0 00:26:26.053 [2024-12-09 15:19:27.842189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.053 [2024-12-09 15:19:27.842211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.850771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efe720 00:26:26.311 [2024-12-09 15:19:27.851564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.851587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.860899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016eeaab8 00:26:26.311 [2024-12-09 15:19:27.862138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.862159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.869547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef8e88 00:26:26.311 [2024-12-09 15:19:27.870644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.870664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.878693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee7818 00:26:26.311 [2024-12-09 15:19:27.879665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.879683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.888268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21546c0) with pdu=0x200016ee23b8 00:26:26.311 [2024-12-09 15:19:27.889332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.889351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.897374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef35f0 00:26:26.311 [2024-12-09 15:19:27.898462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.898480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.906240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ee9168 00:26:26.311 [2024-12-09 15:19:27.906939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.906960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.916500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef7970 00:26:26.311 [2024-12-09 15:19:27.918005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.918023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.922831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016efdeb0 00:26:26.311 [2024-12-09 15:19:27.923581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.923600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:26.311 [2024-12-09 15:19:27.931376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016edece0 00:26:26.311 [2024-12-09 15:19:27.932052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.932071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:26.311 28240.50 IOPS, 110.31 MiB/s [2024-12-09T14:19:28.106Z] [2024-12-09 15:19:27.940767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21546c0) with pdu=0x200016ef0788 00:26:26.311 [2024-12-09 15:19:27.941537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.311 [2024-12-09 15:19:27.941554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.311 00:26:26.311 
Latency(us) 00:26:26.311 [2024-12-09T14:19:28.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.311 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:26.311 nvme0n1 : 2.00 28253.14 110.36 0.00 0.00 4526.22 1903.66 12483.05 00:26:26.311 [2024-12-09T14:19:28.106Z] =================================================================================================================== 00:26:26.311 [2024-12-09T14:19:28.106Z] Total : 28253.14 110.36 0.00 0.00 4526.22 1903.66 12483.05 00:26:26.311 { 00:26:26.311 "results": [ 00:26:26.311 { 00:26:26.311 "job": "nvme0n1", 00:26:26.311 "core_mask": "0x2", 00:26:26.311 "workload": "randwrite", 00:26:26.311 "status": "finished", 00:26:26.311 "queue_depth": 128, 00:26:26.311 "io_size": 4096, 00:26:26.311 "runtime": 2.003636, 00:26:26.311 "iops": 28253.135799117204, 00:26:26.311 "mibps": 110.36381171530158, 00:26:26.311 "io_failed": 0, 00:26:26.311 "io_timeout": 0, 00:26:26.311 "avg_latency_us": 4526.216425488459, 00:26:26.311 "min_latency_us": 1903.664761904762, 00:26:26.311 "max_latency_us": 12483.047619047618 00:26:26.311 } 00:26:26.311 ], 00:26:26.311 "core_count": 1 00:26:26.311 } 00:26:26.311 15:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:26.311 15:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:26.311 15:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:26.311 | .driver_specific 00:26:26.311 | .nvme_error 00:26:26.311 | .status_code 00:26:26.311 | .command_transient_transport_error' 00:26:26.311 15:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 )) 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1576352 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1576352 ']' 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1576352 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1576352 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1576352' 00:26:26.569 killing process with pid 1576352 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1576352 00:26:26.569 Received shutdown signal, test time was about 2.000000 seconds 00:26:26.569 00:26:26.569 Latency(us) 00:26:26.569 [2024-12-09T14:19:28.364Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.569 [2024-12-09T14:19:28.364Z] =================================================================================================================== 00:26:26.569 [2024-12-09T14:19:28.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.569 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1576352 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1576817 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1576817 /var/tmp/bperf.sock 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1576817 ']' 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:26.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.828 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.828 [2024-12-09 15:19:28.426306] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:26.828 [2024-12-09 15:19:28.426351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576817 ] 00:26:26.828 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.828 Zero copy mechanism will not be used. 
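The commands above show digest.sh starting a second bdevperf instance for the 128 KiB error-injection pass: -m 2 pins it to core 1 (core mask 0x2), -r points it at the private RPC socket /var/tmp/bperf.sock, -w randwrite -o 131072 -q 16 -t 2 select the workload, I/O size, queue depth and run time, and -z keeps bdevperf idle until a perform_tests RPC arrives. The zero-copy notice is expected, since the 131072-byte I/O size exceeds the 65536-byte zero-copy threshold. A minimal hand-run equivalent, sketched from the command line recorded in this trace (the polling loop is only an illustrative stand-in for the waitforlisten helper, not the helper itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf in wait-for-RPC mode on core 1 with its own RPC socket
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # poll the socket until the application is ready to accept configuration RPCs
  until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done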
00:26:26.828 [2024-12-09 15:19:28.500016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.828 [2024-12-09 15:19:28.536691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.086 15:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.651 nvme0n1 00:26:27.651 15:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:27.651 15:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.651 15:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.651 15:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.651 15:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:27.651 15:19:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:27.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.651 Zero copy mechanism will not be used. 00:26:27.651 Running I/O for 2 seconds... 
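With that, the second run is fully wired up: per-controller NVMe error counters and unlimited bdev retries are enabled in bdevperf, the controller is attached over TCP with data digest checking (--ddgst), and the accel error injector has been switched from disable to corrupting crc32c results at an interval of 32 operations, which is what produces the stream of data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions below. Condensed into a hand-runnable sketch of the same RPC sequence (the un-prefixed rpc_cmd calls in the trace use whatever default socket the test configured, assumed here to be the nvmf target application; the individual commands and arguments are taken from the trace itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # bdevperf side: keep per-controller NVMe error statistics and retry failed I/O indefinitely
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with data digest enabled so every data PDU is CRC32C-protected
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt crc32c results in the accel layer every 32 operations (default RPC socket assumed)
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the 2-second workload, then read back the transient transport error count
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'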
00:26:27.651 [2024-12-09 15:19:29.368393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.368469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.368498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.372857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.372918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.372939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.377214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.377283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.377305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.381631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.381688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.381707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.385929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.385982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.386001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.390207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.390270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.390289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.394518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.394579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.394597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 
cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.398760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.398810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.398832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.402974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.403038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.403056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.407229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.407302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.407321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.411467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.411537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.411556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.415967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.416041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.416060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.651 [2024-12-09 15:19:29.420615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.651 [2024-12-09 15:19:29.420666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.651 [2024-12-09 15:19:29.420684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.652 [2024-12-09 15:19:29.425588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.652 [2024-12-09 15:19:29.425638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.652 [2024-12-09 15:19:29.425657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.652 [2024-12-09 15:19:29.430877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.652 [2024-12-09 15:19:29.430931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.652 [2024-12-09 15:19:29.430949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.652 [2024-12-09 15:19:29.436459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.652 [2024-12-09 15:19:29.436530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.652 [2024-12-09 15:19:29.436548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.652 [2024-12-09 15:19:29.441672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.652 [2024-12-09 15:19:29.441731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.652 [2024-12-09 15:19:29.441752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.447103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.447157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.447179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.452186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.452272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.452294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.456917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.456983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.457002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.461492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.461554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.461573] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.465954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.466018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.466037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.470210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.470275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.470293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.474669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.474731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.474749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.479137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.479198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.479222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.483760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.483811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.483829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.488404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.488496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.488515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.493022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.493099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.493118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.497541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.497594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.497613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.502291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.502346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.502364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.506820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.506880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.506899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.511412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.511483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.511502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.515840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.515899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.515917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.519994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.520065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.520088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.524454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.524524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 
15:19:29.524543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.528933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.529013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.529031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.533082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.533155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.533174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.537192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.537272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.537290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.541355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.541422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.541441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.545493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.545567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.911 [2024-12-09 15:19:29.545585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.911 [2024-12-09 15:19:29.549600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.911 [2024-12-09 15:19:29.549677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.549697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.553803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.553863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:27.912 [2024-12-09 15:19:29.553882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.557909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.557982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.558001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.562067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.562125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.562143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.566165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.566226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.566245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.570447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.570505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.570524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.575054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.575126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.575145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.579188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.579251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.579286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.583332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.583425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.583444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.587806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.587888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.587907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.592373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.592430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.592449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.596486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.596540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.596559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.600606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.600677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.600696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.604717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.604769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.604803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.608847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.608924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.608942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.612998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.613075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.613094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.617112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.617186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.617205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.621253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.621324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.621342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.625959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.626064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.626083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.631632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.631810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.631832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.637576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.637699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.637718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.642405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.642500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.642518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.647896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.647962] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.647981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.653624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.653696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.653715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.658867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.658965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.658984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.664163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.664234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.664269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.669332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.669448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.669478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.674631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.912 [2024-12-09 15:19:29.674733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.912 [2024-12-09 15:19:29.674752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.912 [2024-12-09 15:19:29.679943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.913 [2024-12-09 15:19:29.680075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.913 [2024-12-09 15:19:29.680093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.913 [2024-12-09 15:19:29.685881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.913 [2024-12-09 15:19:29.686022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.913 [2024-12-09 15:19:29.686040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.913 [2024-12-09 15:19:29.692961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.913 [2024-12-09 15:19:29.693057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.913 [2024-12-09 15:19:29.693076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.913 [2024-12-09 15:19:29.699973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:27.913 [2024-12-09 15:19:29.700156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.913 [2024-12-09 15:19:29.700175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.706501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.706602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.706623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.712698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.712787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.712807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.718691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.718762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.718782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.723663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.723723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.723742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.728486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 
15:19:29.728564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.728583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.733714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.733767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.733786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.738698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.738770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.738789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.744097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.744166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.744184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.749287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.749339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.749357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.754433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.754503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.754522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.759758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.759876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.759895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.764781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with 
pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.764837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.764855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.769845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.769906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.769924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.775165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.775242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.775264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.780180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.780251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.780268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.785360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.785428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.785446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.790340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.790410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.790427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.795980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.796054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.796073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.800653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.800726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.800745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.805493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.805549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.805567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.810373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.810450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.815329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.815394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.815412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.820351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.820426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.820445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.825670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.825726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.825744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.830317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.830380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.830398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.834961] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.835015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.835034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.172 [2024-12-09 15:19:29.839440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.172 [2024-12-09 15:19:29.839558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.172 [2024-12-09 15:19:29.839577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.844982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.845073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.845091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.849900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.849966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.849984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.855207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.855358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.855377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.860203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.860262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.860296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.865386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.865463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.865493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.870244] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.870313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.870332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.875523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.875573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.875591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.880565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.880620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.880639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.885024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.885089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.885108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.889388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.889479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.889498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.893674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.893804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.893823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.898130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.898209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.898233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 
[2024-12-09 15:19:29.902661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.902719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.902741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.907009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.907097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.907116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.911471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.911537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.911556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.915624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.915684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.915702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.920018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.920097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.920115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.924377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.924440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.924458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.929081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.929175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.929193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 
p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.934188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.934258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.934293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.939016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.939102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.939120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.944145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.944214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.944238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.949238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.949357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.949375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.953992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.954063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.954081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.958820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.958910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.958929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.173 [2024-12-09 15:19:29.963360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.173 [2024-12-09 15:19:29.963419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.173 [2024-12-09 15:19:29.963445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.967878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.967958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.967980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.973032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.973105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.973127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.977696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.977754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.977774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.982108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.982163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.982182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.986416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.986484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.986503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.990574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.990630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.990649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.994890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.994956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.994976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:29.999351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:29.999421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:29.999440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.004028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.004107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.004130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.009359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.009429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.009452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.014600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.014668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.014688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.020013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.020148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.020168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.025525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.025590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.025617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.030481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.030541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.030563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.035412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.035497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.035518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.040286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.040364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.040384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.433 [2024-12-09 15:19:30.044766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.433 [2024-12-09 15:19:30.044870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.433 [2024-12-09 15:19:30.044889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.049092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.049169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.049187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.053661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.053768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.053789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.058062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.058133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.058152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.062405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.062508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 
15:19:30.062528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.066771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.066845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.066865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.071191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.071257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.071277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.075705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.075759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.075777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.079963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.080015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.080034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.084167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.084224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.084243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.088643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.088694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.088712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.093072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.093197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:28.434 [2024-12-09 15:19:30.093215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.098710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.098768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.098788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.103743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.103799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.103818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.108441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.108493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.108512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.112920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.112989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.113009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.117542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.117596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.117615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.122148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.122209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.122234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.126442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.126495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.126514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.130683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.130743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.130762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.134968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.135029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.135049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.139324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.139392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.139412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.143585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.143714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.143736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.147845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.147899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.147919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.152089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.152155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.152173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.156713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.156766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.156785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.161505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.161558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.161576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.166488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.434 [2024-12-09 15:19:30.166551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.434 [2024-12-09 15:19:30.166569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.434 [2024-12-09 15:19:30.171749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.171879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.171897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.176625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.176696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.176715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.181886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.181951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.181969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.187290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.187376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.187394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.192488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.192564] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.192583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.196930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.197012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.197031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.201281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.201337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.201356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.205540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.205616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.205634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.210198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.210326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.210346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.215762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.215916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.215935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.435 [2024-12-09 15:19:30.222150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.435 [2024-12-09 15:19:30.222244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.435 [2024-12-09 15:19:30.222266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.227652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.227748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.227770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.232432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.232499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.232521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.237229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.237307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.237326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.241619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.241693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.241713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.245849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.245915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.245934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.250055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.250133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.250152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.254294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.254364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.254383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.258507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 
15:19:30.258581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.258599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.262760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.262849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.262867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.266963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.267029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.267051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.271153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.271228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.271247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.275323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.275392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.275411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.279495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.279567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.279586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.283611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.283694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.283713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.287775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with 
pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.287837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.287856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.291954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.292037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.292056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.296410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.296488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.296507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.300619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.300677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.300696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.304743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.304830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.304848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.308855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.308932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.308951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.313004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.313081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.313099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.317117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.317192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.317211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.321296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.321357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.321377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.325630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.325715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.325735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.329960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.330024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.695 [2024-12-09 15:19:30.330043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.695 [2024-12-09 15:19:30.334323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.695 [2024-12-09 15:19:30.334424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.334443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.338842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.338897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.338915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.343895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.343961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.343980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.349719] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.349881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.349899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.356863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.357024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.357043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 6500.00 IOPS, 812.50 MiB/s [2024-12-09T14:19:30.491Z] [2024-12-09 15:19:30.364483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.364644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.364663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.371302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.371449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.371468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.378126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.378292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.378313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.384491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.384601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.384620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.389324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.389391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.389410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.393807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.393882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.393905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.398278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.398330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.398349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.402791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.402854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.402873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.407198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.407271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.407290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.411669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.411741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.411760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.416148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.416224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.416243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.420545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.420662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.420681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.424961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.425030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.425049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.429593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.429720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.429738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.433962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.434016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.434034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.438338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.438402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.438421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.442973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.443025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.443044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.447678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.447781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.447802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.452180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.452253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.452271] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.456724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.456795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.456824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.461335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.461442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.461460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.465765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.465834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.465853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.470227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.470294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.696 [2024-12-09 15:19:30.470312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.696 [2024-12-09 15:19:30.474987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.696 [2024-12-09 15:19:30.475057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.697 [2024-12-09 15:19:30.475076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.697 [2024-12-09 15:19:30.479596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.697 [2024-12-09 15:19:30.479708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.697 [2024-12-09 15:19:30.479726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.697 [2024-12-09 15:19:30.484577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.697 [2024-12-09 15:19:30.484629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.697 [2024-12-09 15:19:30.484650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.489853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.489912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.489935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.494807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.494877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.494899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.500311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.500379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.500398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.504862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.504935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.504954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.509256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.509312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.509331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.513531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.513585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.513608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.518089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.518169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 
15:19:30.518188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.522852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.522914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.522933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.527289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.527345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.527364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.531778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.531842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.531862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.536402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.536458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.536477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.540767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.540838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.540856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.545231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.545302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.545320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.549467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.549521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
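The block above is one repeating pattern: tcp.c:2241:data_crc32_calc_done reports a data digest error on the queue pair, then nvme_qpair.c prints the affected WRITE and its completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. marked retriable. NVMe/TCP's optional data digest (DDGST) is a CRC32C over a PDU's data section that the sender appends and the receiver recomputes, and a mismatch between the two values is exactly what these lines report, consistent with a digest error-injection pass of this test. As a minimal standalone sketch of that kind of check (illustrative only, not SPDK's code; the payload buffer, its size, and the flipped bit are made up for the example):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78): slow but
 * dependency-free. Production code uses lookup tables or the CPU CRC32 instruction. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = ~0u;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512];                       /* stand-in for one PDU data section */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t sent_ddgst = crc32c(payload, sizeof(payload)); /* DDGST appended by the sender */
    payload[100] ^= 0x01;                       /* simulate corruption in flight */
    uint32_t recv_ddgst = crc32c(payload, sizeof(payload)); /* recomputed on receipt */

    if (recv_ddgst != sent_ddgst)
        printf("Data digest error: expected 0x%08x, got 0x%08x\n",
               (unsigned)sent_ddgst, (unsigned)recv_ddgst);
    return 0;
}

CRC32C (rather than the zlib CRC32 polynomial) is the spec's choice largely because it has wide hardware support, and the _calc_done suffix in the log suggests the digest here is computed asynchronously and verified in a completion callback, which is why the error is reported per PDU as each calculation finishes.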
00:26:28.956 [2024-12-09 15:19:30.549540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.553870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.553947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.553966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.558644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.558745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.558764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.564674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.564767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.564785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.570608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.570713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.570732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.576865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.577032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.577051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.583952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.584080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.584099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.588892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.588951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.588970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.593419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.593475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.593494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.598013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.598082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.598101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.602857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.602914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.602933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.956 [2024-12-09 15:19:30.607512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.956 [2024-12-09 15:19:30.607584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.956 [2024-12-09 15:19:30.607603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.612009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.612062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.612081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.616259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.616330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.616349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.620683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.620750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.620769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.625250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.625350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.625369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.631669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.631721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.631741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.637046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.637190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.637209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.643906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.644078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.644101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.651108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.651240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.651259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.658514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.658690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.658709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.665635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.665716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.665735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.673196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.673345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.673364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.680449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.680646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.680667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.687265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.687442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.687461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.694534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.694705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.694723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.702507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.702694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.702712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.709274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.709437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.709456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.716684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.716843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.716862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.723573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.723732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.723750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.730454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.730562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.730581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.736238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.736310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.736330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.742015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.742123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.742142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.957 [2024-12-09 15:19:30.747854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:28.957 [2024-12-09 15:19:30.747939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.957 [2024-12-09 15:19:30.747960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.754554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.754643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.754664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.759444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 
15:19:30.759509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.759528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.764180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.764267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.768471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.768542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.768562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.772842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.772918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.772937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.776930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.777000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.777019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.781012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.781080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.781098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.785094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.785165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.785183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.789161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with 
pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.789247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.789266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.793121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.793175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.793194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.796931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.797004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.797027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.800689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.800757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.800776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.804506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.804624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.804642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.808271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.808338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.808358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.812099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.812155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.812173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.816443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.816519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.816539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.821026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.821098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.821117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.825809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.825863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.825881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.829831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.829884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.829902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.833785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.833875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.833893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.837752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.837827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.837846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.841616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.841670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.841689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.845541] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.845626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.845645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.849697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.849776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.849795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.854182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.217 [2024-12-09 15:19:30.854266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.217 [2024-12-09 15:19:30.854285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.217 [2024-12-09 15:19:30.858741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.858795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.858814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.863022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.863085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.863103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.867406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.867457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.867474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.871247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.871312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.871330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.875054] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.875126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.875144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.878822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.878880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.878898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.882718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.882770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.882788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.886711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.886803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.886822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.890578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.890690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.890709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.894397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.894493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.894511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.898249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.898314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.898334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 
[2024-12-09 15:19:30.902100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.902156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.902178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.905966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.906075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.906094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.909753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.909868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.909887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.913649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.913705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.913724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.917960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.918059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.918078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.922458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.922510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.922530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.926914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.927044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.927063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 
p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.931796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.931848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.931866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.936790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.936867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.936885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.941006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.941074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.941092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.944971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.945023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.945041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.948828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.948885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.948905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.952858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.952926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.952945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.956842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.956907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.960734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.960785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.960804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.964694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.964804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.218 [2024-12-09 15:19:30.964822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.218 [2024-12-09 15:19:30.968583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.218 [2024-12-09 15:19:30.968679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.968698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.972616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.972669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.972687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.976547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.976674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.976692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.980326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.980392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.980410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.984094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.984173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.984191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.987869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.987921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.987940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.991727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.991792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.991810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:30.995921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:30.995979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:30.995997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:31.000365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:31.000441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:31.000459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.219 [2024-12-09 15:19:31.005172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.219 [2024-12-09 15:19:31.005252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.219 [2024-12-09 15:19:31.005271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.011054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.011255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.011280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.017050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.017168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.017189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.023373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.023539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.023558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.029993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.030170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.030189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.036244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.036350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.036369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.042524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.042710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.042729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.049234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.049338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.049356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.055772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.055885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.055904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.062076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.062264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 
15:19:31.062283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.068711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.068823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.068841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.074942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.075120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.075138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.081505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.081616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.081635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.088314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.088482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.088500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.094347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.094457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.478 [2024-12-09 15:19:31.094476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.478 [2024-12-09 15:19:31.100757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.478 [2024-12-09 15:19:31.100951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.100977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.107685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.107768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:29.479 [2024-12-09 15:19:31.107787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.113777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.113880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.113899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.119814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.119960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.119978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.125744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.125914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.131919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.132094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.132112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.137908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.138075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.138094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.143284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.143421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.143439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.148977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.149089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.149108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.154533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.154697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.154716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.159094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.159184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.159203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.162998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.163086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.163105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.167083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.167178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.167200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.171248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.171345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.171364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.175205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.175325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.175343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.179148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.179238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.179256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.183297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.183383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.183402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.187238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.187342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.187360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.191147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.191275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.191293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.195772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.195873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.195891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.201026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.201108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.201126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.204939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.205047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.205065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.208876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.208947] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.208966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.212727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.212825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.212843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.216841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.216961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.216979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.220790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.220875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.220893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.224637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.224763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.224781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.479 [2024-12-09 15:19:31.228416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.479 [2024-12-09 15:19:31.228548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.479 [2024-12-09 15:19:31.228567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.232517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.232602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.232621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.236457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.236564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.236583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.240378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.240458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.240487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.244324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.244412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.244430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.248232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.248344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.248362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.252249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.252357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.252376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.256398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.256500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.256518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.260368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.260485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.260504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.264419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 
15:19:31.264519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.264537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.480 [2024-12-09 15:19:31.268703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.480 [2024-12-09 15:19:31.268797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.480 [2024-12-09 15:19:31.268818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.738 [2024-12-09 15:19:31.273317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.738 [2024-12-09 15:19:31.273372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.738 [2024-12-09 15:19:31.273396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.738 [2024-12-09 15:19:31.278405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.738 [2024-12-09 15:19:31.278520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.738 [2024-12-09 15:19:31.278541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.738 [2024-12-09 15:19:31.283733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.738 [2024-12-09 15:19:31.283917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.738 [2024-12-09 15:19:31.283936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.738 [2024-12-09 15:19:31.290246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.738 [2024-12-09 15:19:31.290409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.738 [2024-12-09 15:19:31.290427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.738 [2024-12-09 15:19:31.295788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.738 [2024-12-09 15:19:31.295955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.738 [2024-12-09 15:19:31.295974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.738 [2024-12-09 15:19:31.301229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with 
pdu=0x200016efef90 00:26:29.738 [2024-12-09 15:19:31.301422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.738 [2024-12-09 15:19:31.301447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.306866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.307059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.307077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.312953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.313048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.313066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.318565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.318761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.318780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.324669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.324844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.324862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.331245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.331327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.331346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.336092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.336151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.336170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.340668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.340748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.340766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.344915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.345008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.345025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.348970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.349047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.349065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.352939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.352990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.353008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.356816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.356889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.356908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.739 [2024-12-09 15:19:31.360695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.360806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.360824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.739 6427.50 IOPS, 803.44 MiB/s [2024-12-09T14:19:31.534Z] [2024-12-09 15:19:31.365375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2154ba0) with pdu=0x200016efef90 00:26:29.739 [2024-12-09 15:19:31.365424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.739 [2024-12-09 15:19:31.365443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.739 
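The run of entries above is the expected behaviour of the digest-error test: every WRITE submitted by the 131072-byte randwrite job fails the TCP data-digest (CRC32C) check in data_crc32_calc_done() and is completed back to the host with NVMe status (00/22), COMMAND TRANSIENT TRANSPORT ERROR, which is the counter the test asserts on just below. As a rough cross-check against that counter, the same failures can be tallied straight from a saved copy of this log; this is a hypothetical post-processing step, not part of digest.sh, and the log path is a placeholder, but the patterns match the entries shown above:

# Tally digest failures and the matching transient-transport-error completions
# from a captured log file (grep -o counts every occurrence, even when several
# entries share one wrapped line).
LOG=nvmf_digest_error.log
grep -o 'Data digest error on tqpair' "$LOG" | wc -l
grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG" | wc -l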
00:26:29.739 Latency(us) 00:26:29.739 [2024-12-09T14:19:31.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.739 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:29.739 nvme0n1 : 2.00 6426.37 803.30 0.00 0.00 2485.27 1505.77 13169.62 00:26:29.739 [2024-12-09T14:19:31.534Z] =================================================================================================================== 00:26:29.739 [2024-12-09T14:19:31.534Z] Total : 6426.37 803.30 0.00 0.00 2485.27 1505.77 13169.62 00:26:29.739 { 00:26:29.739 "results": [ 00:26:29.739 { 00:26:29.739 "job": "nvme0n1", 00:26:29.739 "core_mask": "0x2", 00:26:29.739 "workload": "randwrite", 00:26:29.739 "status": "finished", 00:26:29.739 "queue_depth": 16, 00:26:29.739 "io_size": 131072, 00:26:29.739 "runtime": 2.003465, 00:26:29.739 "iops": 6426.366320349994, 00:26:29.739 "mibps": 803.2957900437492, 00:26:29.739 "io_failed": 0, 00:26:29.739 "io_timeout": 0, 00:26:29.739 "avg_latency_us": 2485.271462524272, 00:26:29.739 "min_latency_us": 1505.767619047619, 00:26:29.739 "max_latency_us": 13169.615238095239 00:26:29.739 } 00:26:29.739 ], 00:26:29.739 "core_count": 1 00:26:29.739 } 00:26:29.739 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:29.739 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:29.739 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:29.739 | .driver_specific 00:26:29.739 | .nvme_error 00:26:29.739 | .status_code 00:26:29.739 | .command_transient_transport_error' 00:26:29.739 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:29.997 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 416 > 0 )) 00:26:29.997 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1576817 00:26:29.997 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1576817 ']' 00:26:29.997 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1576817 00:26:29.997 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:29.997 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.998 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1576817 00:26:29.998 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:29.998 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:29.998 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1576817' 00:26:29.998 killing process with pid 1576817 00:26:29.998 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1576817 00:26:29.998 Received shutdown signal, test time was about 2.000000 seconds 00:26:29.998 00:26:29.998 Latency(us) 00:26:29.998 [2024-12-09T14:19:31.793Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.998 [2024-12-09T14:19:31.793Z] =================================================================================================================== 00:26:29.998 [2024-12-09T14:19:31.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.998 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1576817 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1575176 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1575176 ']' 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1575176 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575176 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575176' 00:26:30.257 killing process with pid 1575176 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1575176 00:26:30.257 15:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1575176 00:26:30.257 00:26:30.257 real 0m13.961s 00:26:30.257 user 0m26.786s 00:26:30.257 sys 0m4.498s 00:26:30.257 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.257 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.257 ************************************ 00:26:30.257 END TEST nvmf_digest_error 00:26:30.257 ************************************ 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.515 rmmod nvme_tcp 00:26:30.515 rmmod nvme_fabrics 00:26:30.515 rmmod nvme_keyring 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:30.515 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:30.515 15:19:32 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1575176 ']' 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1575176 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1575176 ']' 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1575176 00:26:30.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1575176) - No such process 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1575176 is not found' 00:26:30.516 Process with pid 1575176 is not found 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.516 15:19:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.419 15:19:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.419 00:26:32.419 real 0m36.457s 00:26:32.419 user 0m55.622s 00:26:32.419 sys 0m13.568s 00:26:32.419 15:19:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.419 15:19:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.419 ************************************ 00:26:32.419 END TEST nvmf_digest 00:26:32.419 ************************************ 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.678 ************************************ 00:26:32.678 START TEST nvmf_bdevperf 00:26:32.678 ************************************ 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 
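Before the log moves on to bdevperf, note how the assertion (( 416 > 0 )) above got its value: get_transient_errcount queries bdevperf's RPC socket with bdev_get_iostat and extracts the per-bdev NVMe error counter with jq, exactly as traced at host/digest.sh@27-28. A minimal standalone form of that query, using the socket path and bdev name from this run, would be:

# Read the transient-transport-error count for nvme0n1 from the running
# bdevperf instance over its JSON-RPC socket (socket path and bdev name as
# used in this run; adjust for other setups).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'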
00:26:32.678 * Looking for test storage... 00:26:32.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:32.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.678 --rc genhtml_branch_coverage=1 00:26:32.678 --rc genhtml_function_coverage=1 00:26:32.678 --rc genhtml_legend=1 00:26:32.678 --rc geninfo_all_blocks=1 00:26:32.678 --rc geninfo_unexecuted_blocks=1 00:26:32.678 00:26:32.678 ' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:32.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.678 --rc genhtml_branch_coverage=1 00:26:32.678 --rc genhtml_function_coverage=1 00:26:32.678 --rc genhtml_legend=1 00:26:32.678 --rc geninfo_all_blocks=1 00:26:32.678 --rc geninfo_unexecuted_blocks=1 00:26:32.678 00:26:32.678 ' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:32.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.678 --rc genhtml_branch_coverage=1 00:26:32.678 --rc genhtml_function_coverage=1 00:26:32.678 --rc genhtml_legend=1 00:26:32.678 --rc geninfo_all_blocks=1 00:26:32.678 --rc geninfo_unexecuted_blocks=1 00:26:32.678 00:26:32.678 ' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:32.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.678 --rc genhtml_branch_coverage=1 00:26:32.678 --rc genhtml_function_coverage=1 00:26:32.678 --rc genhtml_legend=1 00:26:32.678 --rc geninfo_all_blocks=1 00:26:32.678 --rc geninfo_unexecuted_blocks=1 00:26:32.678 00:26:32.678 ' 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.678 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.937 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.937 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.937 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:32.937 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:32.937 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.937 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.938 15:19:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:39.575 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:39.575 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:39.575 Found net devices under 0000:af:00.0: cvl_0_0 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:39.575 Found net devices under 0000:af:00.1: cvl_0_1 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.575 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:26:39.576 00:26:39.576 --- 10.0.0.2 ping statistics --- 00:26:39.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.576 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:26:39.576 00:26:39.576 --- 10.0.0.1 ping statistics --- 00:26:39.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.576 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1580795 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1580795 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1580795 ']' 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 [2024-12-09 15:19:40.447274] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
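At this point nvmf/common.sh has finished the TCP test topology: both E810 netdevs are flushed, cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and given the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, reachability is verified with a ping in each direction, and nvmf_tgt is launched inside the target namespace (its EAL initialization continues below). A condensed, hypothetical replay of that sequence with the names taken from the trace ($SPDK_DIR stands in for the checkout path and is not a variable the script uses):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the harness also tags the rule with an SPDK_NVMF comment
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &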
00:26:39.576 [2024-12-09 15:19:40.447322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.576 [2024-12-09 15:19:40.526913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:39.576 [2024-12-09 15:19:40.567518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.576 [2024-12-09 15:19:40.567554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.576 [2024-12-09 15:19:40.567561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.576 [2024-12-09 15:19:40.567567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.576 [2024-12-09 15:19:40.567572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.576 [2024-12-09 15:19:40.568932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.576 [2024-12-09 15:19:40.569038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.576 [2024-12-09 15:19:40.569040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 [2024-12-09 15:19:40.705546] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 Malloc0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
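With the target up and listening on /var/tmp/spdk.sock, the harness's rpc_cmd wrapper issues three RPCs: create the TCP transport (with the -o -u 8192 options as traced), create a 64 MB Malloc bdev with 512-byte blocks (Malloc0), and create the subsystem nqn.2016-06.io.spdk:cnode1. A rough equivalent using SPDK's stock scripts/rpc.py client (method names and arguments are verbatim from the trace; the relative script path is assumed):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001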
00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:39.576 [2024-12-09 15:19:40.770621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:39.576 { 00:26:39.576 "params": { 00:26:39.576 "name": "Nvme$subsystem", 00:26:39.576 "trtype": "$TEST_TRANSPORT", 00:26:39.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.576 "adrfam": "ipv4", 00:26:39.576 "trsvcid": "$NVMF_PORT", 00:26:39.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.576 "hdgst": ${hdgst:-false}, 00:26:39.576 "ddgst": ${ddgst:-false} 00:26:39.576 }, 00:26:39.576 "method": "bdev_nvme_attach_controller" 00:26:39.576 } 00:26:39.576 EOF 00:26:39.576 )") 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:39.576 15:19:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:39.576 "params": { 00:26:39.576 "name": "Nvme1", 00:26:39.576 "trtype": "tcp", 00:26:39.576 "traddr": "10.0.0.2", 00:26:39.576 "adrfam": "ipv4", 00:26:39.576 "trsvcid": "4420", 00:26:39.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.576 "hdgst": false, 00:26:39.576 "ddgst": false 00:26:39.576 }, 00:26:39.576 "method": "bdev_nvme_attach_controller" 00:26:39.576 }' 00:26:39.576 [2024-12-09 15:19:40.823757] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
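Malloc0 is then attached to cnode1 as a namespace, a TCP listener is added on 10.0.0.2:4420, and bdevperf is pointed at it. The --json /dev/fd/62 argument in the trace is consistent with the target JSON being generated on the fly and fed to bdevperf through process substitution; a minimal sketch of that pattern (binary path abbreviated, flags copied from the trace, bdevperf's own EAL startup continues below):

  # gen_nvmf_target_json (from nvmf/common.sh) emits the bdev_nvme_attach_controller
  # config printed above; the <(...) file is what appears as /dev/fd/NN on the command line
  "$SPDK_DIR"/build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1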
00:26:39.576 [2024-12-09 15:19:40.823810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580985 ] 00:26:39.576 [2024-12-09 15:19:40.901221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.577 [2024-12-09 15:19:40.941024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.577 Running I/O for 1 seconds... 00:26:40.511 11295.00 IOPS, 44.12 MiB/s 00:26:40.511 Latency(us) 00:26:40.511 [2024-12-09T14:19:42.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.511 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:40.511 Verification LBA range: start 0x0 length 0x4000 00:26:40.511 Nvme1n1 : 1.01 11361.07 44.38 0.00 0.00 11223.78 2371.78 12420.63 00:26:40.511 [2024-12-09T14:19:42.306Z] =================================================================================================================== 00:26:40.511 [2024-12-09T14:19:42.306Z] Total : 11361.07 44.38 0.00 0.00 11223.78 2371.78 12420.63 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1581264 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:40.769 { 00:26:40.769 "params": { 00:26:40.769 "name": "Nvme$subsystem", 00:26:40.769 "trtype": "$TEST_TRANSPORT", 00:26:40.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.769 "adrfam": "ipv4", 00:26:40.769 "trsvcid": "$NVMF_PORT", 00:26:40.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.769 "hdgst": ${hdgst:-false}, 00:26:40.769 "ddgst": ${ddgst:-false} 00:26:40.769 }, 00:26:40.769 "method": "bdev_nvme_attach_controller" 00:26:40.769 } 00:26:40.769 EOF 00:26:40.769 )") 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
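The one-second verify run above completes at 11361.07 IOPS with the configured 4096-byte I/O size, which agrees with the reported 44.38 MiB/s; a quick arithmetic check (illustration only):

  awk 'BEGIN { printf "%.2f MiB/s\n", 11361.07 * 4096 / (1024 * 1024) }'   # -> 44.38 MiB/s

The trace then starts a second bdevperf instance with -t 15 -f, whose generated target JSON and EAL startup continue below.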
00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:40.769 15:19:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:40.769 "params": { 00:26:40.769 "name": "Nvme1", 00:26:40.769 "trtype": "tcp", 00:26:40.769 "traddr": "10.0.0.2", 00:26:40.769 "adrfam": "ipv4", 00:26:40.769 "trsvcid": "4420", 00:26:40.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:40.769 "hdgst": false, 00:26:40.769 "ddgst": false 00:26:40.769 }, 00:26:40.769 "method": "bdev_nvme_attach_controller" 00:26:40.769 }' 00:26:40.769 [2024-12-09 15:19:42.357892] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:40.769 [2024-12-09 15:19:42.357942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581264 ] 00:26:40.769 [2024-12-09 15:19:42.430946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.769 [2024-12-09 15:19:42.468583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.028 Running I/O for 15 seconds... 00:26:42.896 11406.00 IOPS, 44.55 MiB/s [2024-12-09T14:19:45.626Z] 11528.50 IOPS, 45.03 MiB/s [2024-12-09T14:19:45.626Z] 15:19:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1580795 00:26:43.831 15:19:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:43.831 [2024-12-09 15:19:45.328667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 
15:19:45.328808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.831 [2024-12-09 15:19:45.328920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.831 [2024-12-09 15:19:45.328930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.328940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.328949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.328957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.328966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.328974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.328984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.328992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329131] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.832 [2024-12-09 15:19:45.329488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.832 [2024-12-09 15:19:45.329502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.832 [2024-12-09 15:19:45.329518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.832 [2024-12-09 15:19:45.329532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.832 [2024-12-09 15:19:45.329540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.832 [2024-12-09 15:19:45.329547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 
[2024-12-09 15:19:45.329584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.329989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.329997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112760 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.833 [2024-12-09 15:19:45.330117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.833 [2024-12-09 15:19:45.330125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:43.834 [2024-12-09 15:19:45.330175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.834 [2024-12-09 15:19:45.330726] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.330734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd7f0 is same with the state(6) to be set 00:26:43.834 [2024-12-09 15:19:45.330742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:43.834 [2024-12-09 15:19:45.330748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:43.834 [2024-12-09 15:19:45.330754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113088 len:8 PRP1 0x0 PRP2 0x0 00:26:43.834 [2024-12-09 15:19:45.330761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.834 [2024-12-09 15:19:45.333569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.834 [2024-12-09 15:19:45.333621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.834 [2024-12-09 15:19:45.334198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.834 [2024-12-09 15:19:45.334214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.834 [2024-12-09 15:19:45.334229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.834 [2024-12-09 15:19:45.334404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.834 [2024-12-09 15:19:45.334577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.834 [2024-12-09 15:19:45.334585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.834 [2024-12-09 15:19:45.334592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.334599] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.835 [2024-12-09 15:19:45.346829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.347253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.347302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.347328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.347913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.348203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.348211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.348224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.348231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.835 [2024-12-09 15:19:45.359865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.360283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.360329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.360353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.360935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.361306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.361315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.361321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.361327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.835 [2024-12-09 15:19:45.372698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.373094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.373111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.373118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.373303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.373471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.373480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.373486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.373492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.835 [2024-12-09 15:19:45.385466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.385880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.385897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.385904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.386072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.386249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.386257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.386263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.386269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.835 [2024-12-09 15:19:45.398336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.398743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.398787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.398810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.399282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.399451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.399459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.399469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.399476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.835 [2024-12-09 15:19:45.411190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.411626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.411643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.411650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.411818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.411986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.411995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.412001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.412007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.835 [2024-12-09 15:19:45.423916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.424313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.424330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.424337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.424496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.424656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.424664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.424670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.424675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.835 [2024-12-09 15:19:45.436727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.437140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.437156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.437164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.437338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.437507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.437515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.437522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.437528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.835 [2024-12-09 15:19:45.449537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.449889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.449934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.449956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.450440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.450609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.450617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.835 [2024-12-09 15:19:45.450624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.835 [2024-12-09 15:19:45.450630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.835 [2024-12-09 15:19:45.462363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.835 [2024-12-09 15:19:45.462778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.835 [2024-12-09 15:19:45.462794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.835 [2024-12-09 15:19:45.462801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.835 [2024-12-09 15:19:45.462969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.835 [2024-12-09 15:19:45.463137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.835 [2024-12-09 15:19:45.463146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.463152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.463158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.836 [2024-12-09 15:19:45.475237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.475667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.475712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.475734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.476295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.476455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.476483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.476497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.476511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.836 [2024-12-09 15:19:45.490116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.490614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.490661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.490692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.491290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.491717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.491729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.491738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.491747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.836 [2024-12-09 15:19:45.503087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.503467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.503484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.503491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.503659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.503827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.503836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.503842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.503848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.836 [2024-12-09 15:19:45.515855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.516262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.516307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.516331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.516912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.517358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.517376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.517390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.517403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.836 [2024-12-09 15:19:45.530638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.531149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.531193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.531233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.531670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.531929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.531941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.531950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.531959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.836 [2024-12-09 15:19:45.543649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.544050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.544067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.544074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.544248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.544416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.544424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.544430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.544436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.836 [2024-12-09 15:19:45.556450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.556792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.556808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.556815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.556982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.557150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.557158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.557164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.557170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.836 [2024-12-09 15:19:45.569343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.569756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.569772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.836 [2024-12-09 15:19:45.569779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.836 [2024-12-09 15:19:45.569946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.836 [2024-12-09 15:19:45.570114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.836 [2024-12-09 15:19:45.570123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.836 [2024-12-09 15:19:45.570132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.836 [2024-12-09 15:19:45.570138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.836 [2024-12-09 15:19:45.582173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.836 [2024-12-09 15:19:45.582618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.836 [2024-12-09 15:19:45.582635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.837 [2024-12-09 15:19:45.582643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.837 [2024-12-09 15:19:45.582815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.837 [2024-12-09 15:19:45.582990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.837 [2024-12-09 15:19:45.582998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.837 [2024-12-09 15:19:45.583005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.837 [2024-12-09 15:19:45.583011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.837 [2024-12-09 15:19:45.595267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.837 [2024-12-09 15:19:45.595700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.837 [2024-12-09 15:19:45.595717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.837 [2024-12-09 15:19:45.595725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.837 [2024-12-09 15:19:45.595897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.837 [2024-12-09 15:19:45.596070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.837 [2024-12-09 15:19:45.596078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.837 [2024-12-09 15:19:45.596084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.837 [2024-12-09 15:19:45.596090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:43.837 [2024-12-09 15:19:45.608353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.837 [2024-12-09 15:19:45.608695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.837 [2024-12-09 15:19:45.608711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.837 [2024-12-09 15:19:45.608718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.837 [2024-12-09 15:19:45.608885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.837 [2024-12-09 15:19:45.609053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.837 [2024-12-09 15:19:45.609063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.837 [2024-12-09 15:19:45.609069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.837 [2024-12-09 15:19:45.609075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:43.837 [2024-12-09 15:19:45.621439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:43.837 [2024-12-09 15:19:45.621803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.837 [2024-12-09 15:19:45.621820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:43.837 [2024-12-09 15:19:45.621827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:43.837 [2024-12-09 15:19:45.621999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:43.837 [2024-12-09 15:19:45.622173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:43.837 [2024-12-09 15:19:45.622182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:43.837 [2024-12-09 15:19:45.622189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:43.837 [2024-12-09 15:19:45.622195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.100 [2024-12-09 15:19:45.634382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.100 [2024-12-09 15:19:45.634744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.100 [2024-12-09 15:19:45.634761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.100 [2024-12-09 15:19:45.634768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.100 [2024-12-09 15:19:45.634941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.100 [2024-12-09 15:19:45.635115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.100 [2024-12-09 15:19:45.635123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.100 [2024-12-09 15:19:45.635130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.100 [2024-12-09 15:19:45.635136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.100 [2024-12-09 15:19:45.647203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.100 [2024-12-09 15:19:45.647664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.100 [2024-12-09 15:19:45.647711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.100 [2024-12-09 15:19:45.647737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.100 [2024-12-09 15:19:45.648127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.100 [2024-12-09 15:19:45.648494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.100 [2024-12-09 15:19:45.648513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.100 [2024-12-09 15:19:45.648527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.100 [2024-12-09 15:19:45.648541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.100 [2024-12-09 15:19:45.662092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.100 [2024-12-09 15:19:45.662626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.100 [2024-12-09 15:19:45.662683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.100 [2024-12-09 15:19:45.662715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.100 [2024-12-09 15:19:45.663316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.100 [2024-12-09 15:19:45.663897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.100 [2024-12-09 15:19:45.663908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.100 [2024-12-09 15:19:45.663918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.100 [2024-12-09 15:19:45.663927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.100 [2024-12-09 15:19:45.675009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.100 [2024-12-09 15:19:45.675441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.100 [2024-12-09 15:19:45.675459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.100 [2024-12-09 15:19:45.675466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.100 [2024-12-09 15:19:45.675635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.100 [2024-12-09 15:19:45.675803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.100 [2024-12-09 15:19:45.675811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.100 [2024-12-09 15:19:45.675817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.100 [2024-12-09 15:19:45.675823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.100 10131.00 IOPS, 39.57 MiB/s [2024-12-09T14:19:45.895Z] [2024-12-09 15:19:45.688616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.100 [2024-12-09 15:19:45.688974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.688991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.688998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.689172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.689352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.689361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.689367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.689373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.101 [2024-12-09 15:19:45.701453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.701845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.701861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.701868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.702027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.702191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.702199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.702205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.702211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
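Two recurring items in this stretch of the log can be sanity-checked with a minimal sketch (this is not SPDK code, just a reading aid): the errno = 111 in the posix_sock_create lines is Linux ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the target is down, which is why every reconnect attempt in the controller-reset loop fails immediately; and the interleaved throughput sample (10131.00 IOPS, 39.57 MiB/s) is consistent with 4 KiB writes, assuming the len:8 fields in the WRITE entries mean 8 sectors of 512 bytes each (the sector size is an assumption, not printed in the log).

    import errno

    # errno 111 from the posix_sock_create lines is ECONNREFUSED on Linux:
    # the target at 10.0.0.2:4420 is not accepting connections, so each
    # reconnect attempt in the reset loop fails straight away.
    print(errno.ECONNREFUSED)  # 111 on Linux hosts such as this CI node

    # The "10131.00 IOPS, 39.57 MiB/s" sample matches 4 KiB I/O, assuming
    # len:8 means 8 sectors of 512 bytes (assumed, not stated in the log).
    io_size = 8 * 512                                  # 4096 bytes per write
    iops = 10131.00
    print(f"{iops * io_size / 1024**2:.2f} MiB/s")     # -> 39.57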
00:26:44.101 [2024-12-09 15:19:45.714293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.714708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.714725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.714732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.714891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.715049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.715057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.715063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.715069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.101 [2024-12-09 15:19:45.727051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.727467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.727484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.727491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.727659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.727827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.727836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.727842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.727848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.101 [2024-12-09 15:19:45.740033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.740463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.740479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.740486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.740654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.740821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.740829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.740838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.740845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.101 [2024-12-09 15:19:45.752839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.753278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.753296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.753303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.753478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.753637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.753645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.753651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.753656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.101 [2024-12-09 15:19:45.765568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.765970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.765986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.765993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.766152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.766337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.766346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.766352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.101 [2024-12-09 15:19:45.766358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.101 [2024-12-09 15:19:45.778394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.101 [2024-12-09 15:19:45.778849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.101 [2024-12-09 15:19:45.778895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.101 [2024-12-09 15:19:45.778919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.101 [2024-12-09 15:19:45.779394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.101 [2024-12-09 15:19:45.779563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.101 [2024-12-09 15:19:45.779571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.101 [2024-12-09 15:19:45.779577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.102 [2024-12-09 15:19:45.779583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.102 [2024-12-09 15:19:45.793393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.102 [2024-12-09 15:19:45.793948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.102 [2024-12-09 15:19:45.793993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.102 [2024-12-09 15:19:45.794017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.102 [2024-12-09 15:19:45.794480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.102 [2024-12-09 15:19:45.794736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.102 [2024-12-09 15:19:45.794747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.102 [2024-12-09 15:19:45.794757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.102 [2024-12-09 15:19:45.794766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.102 [2024-12-09 15:19:45.806378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.102 [2024-12-09 15:19:45.806802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.102 [2024-12-09 15:19:45.806818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.102 [2024-12-09 15:19:45.806825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.102 [2024-12-09 15:19:45.806992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.102 [2024-12-09 15:19:45.807160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.102 [2024-12-09 15:19:45.807168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.102 [2024-12-09 15:19:45.807174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.102 [2024-12-09 15:19:45.807180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.102 [2024-12-09 15:19:45.819182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.102 [2024-12-09 15:19:45.819512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.102 [2024-12-09 15:19:45.819527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.102 [2024-12-09 15:19:45.819534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.102 [2024-12-09 15:19:45.819693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.102 [2024-12-09 15:19:45.819852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.102 [2024-12-09 15:19:45.819860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.102 [2024-12-09 15:19:45.819866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.102 [2024-12-09 15:19:45.819872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.102 [2024-12-09 15:19:45.832003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.102 [2024-12-09 15:19:45.832413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.102 [2024-12-09 15:19:45.832429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.102 [2024-12-09 15:19:45.832438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.102 [2024-12-09 15:19:45.832619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.102 [2024-12-09 15:19:45.832787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.102 [2024-12-09 15:19:45.832795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.102 [2024-12-09 15:19:45.832801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.102 [2024-12-09 15:19:45.832807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.102 [2024-12-09 15:19:45.844923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.102 [2024-12-09 15:19:45.845352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.102 [2024-12-09 15:19:45.845369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.102 [2024-12-09 15:19:45.845376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.102 [2024-12-09 15:19:45.845545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.102 [2024-12-09 15:19:45.845713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.102 [2024-12-09 15:19:45.845721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.102 [2024-12-09 15:19:45.845727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.102 [2024-12-09 15:19:45.845733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.102 [2024-12-09 15:19:45.857952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.104 [2024-12-09 15:19:45.858376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.104 [2024-12-09 15:19:45.858393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.104 [2024-12-09 15:19:45.858400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.104 [2024-12-09 15:19:45.858569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.104 [2024-12-09 15:19:45.858737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.104 [2024-12-09 15:19:45.858745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.104 [2024-12-09 15:19:45.858752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.104 [2024-12-09 15:19:45.858758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.104 [2024-12-09 15:19:45.870829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.104 [2024-12-09 15:19:45.871229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.104 [2024-12-09 15:19:45.871246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.104 [2024-12-09 15:19:45.871253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.105 [2024-12-09 15:19:45.871412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.105 [2024-12-09 15:19:45.871573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.105 [2024-12-09 15:19:45.871582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.105 [2024-12-09 15:19:45.871587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.105 [2024-12-09 15:19:45.871593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.105 [2024-12-09 15:19:45.883673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.105 [2024-12-09 15:19:45.884092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.105 [2024-12-09 15:19:45.884108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.105 [2024-12-09 15:19:45.884115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.105 [2024-12-09 15:19:45.884296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.105 [2024-12-09 15:19:45.884464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.105 [2024-12-09 15:19:45.884472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.105 [2024-12-09 15:19:45.884478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.105 [2024-12-09 15:19:45.884484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.372 [2024-12-09 15:19:45.896503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.372 [2024-12-09 15:19:45.896920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.372 [2024-12-09 15:19:45.896936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.372 [2024-12-09 15:19:45.896942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.372 [2024-12-09 15:19:45.897101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.372 [2024-12-09 15:19:45.897282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.372 [2024-12-09 15:19:45.897290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.372 [2024-12-09 15:19:45.897297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.372 [2024-12-09 15:19:45.897303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.372 [2024-12-09 15:19:45.909243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.372 [2024-12-09 15:19:45.909632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.372 [2024-12-09 15:19:45.909676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.372 [2024-12-09 15:19:45.909699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.372 [2024-12-09 15:19:45.910291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.372 [2024-12-09 15:19:45.910460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.372 [2024-12-09 15:19:45.910468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.372 [2024-12-09 15:19:45.910477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.372 [2024-12-09 15:19:45.910484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.372 [2024-12-09 15:19:45.922099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.372 [2024-12-09 15:19:45.922434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.372 [2024-12-09 15:19:45.922452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.372 [2024-12-09 15:19:45.922459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.372 [2024-12-09 15:19:45.922626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.372 [2024-12-09 15:19:45.922795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.372 [2024-12-09 15:19:45.922804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.372 [2024-12-09 15:19:45.922810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.372 [2024-12-09 15:19:45.922816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.372 [2024-12-09 15:19:45.934885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.372 [2024-12-09 15:19:45.935243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.372 [2024-12-09 15:19:45.935260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.372 [2024-12-09 15:19:45.935267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.372 [2024-12-09 15:19:45.935434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.372 [2024-12-09 15:19:45.935601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.372 [2024-12-09 15:19:45.935612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.372 [2024-12-09 15:19:45.935618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.372 [2024-12-09 15:19:45.935624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.372 [2024-12-09 15:19:45.947764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.372 [2024-12-09 15:19:45.948120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.372 [2024-12-09 15:19:45.948137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.372 [2024-12-09 15:19:45.948144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.372 [2024-12-09 15:19:45.948316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.372 [2024-12-09 15:19:45.948485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.372 [2024-12-09 15:19:45.948494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.372 [2024-12-09 15:19:45.948500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.372 [2024-12-09 15:19:45.948506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.372 [2024-12-09 15:19:45.960601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.372 [2024-12-09 15:19:45.961025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:45.961042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:45.961049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:45.961223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:45.961392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:45.961400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:45.961406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:45.961412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.373 [2024-12-09 15:19:45.973430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:45.973772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:45.973788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:45.973795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:45.973954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:45.974118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:45.974126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:45.974132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:45.974137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.373 [2024-12-09 15:19:45.986295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:45.986730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:45.986774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:45.986796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:45.987393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:45.987755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:45.987763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:45.987769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:45.987775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.373 [2024-12-09 15:19:45.999081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:45.999525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:45.999570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:45.999600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.000184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.000781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.000795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.000802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.000808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.373 [2024-12-09 15:19:46.011850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.012197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.012214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:46.012227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.012395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.012562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.012570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.012577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.012583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.373 [2024-12-09 15:19:46.024635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.025047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.025063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:46.025070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.025236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.025421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.025429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.025435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.025441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.373 [2024-12-09 15:19:46.037499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.037902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.037918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:46.037925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.038083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.038247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.038258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.038264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.038270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.373 [2024-12-09 15:19:46.050241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.050653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.050669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:46.050675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.050834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.050992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.051000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.051006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.051012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.373 [2024-12-09 15:19:46.063086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.063497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.063513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:46.063520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.063688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.063855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.063863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.063870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.063875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.373 [2024-12-09 15:19:46.076015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.076459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.076506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.373 [2024-12-09 15:19:46.076530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.373 [2024-12-09 15:19:46.077043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.373 [2024-12-09 15:19:46.077212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.373 [2024-12-09 15:19:46.077225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.373 [2024-12-09 15:19:46.077231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.373 [2024-12-09 15:19:46.077241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.373 [2024-12-09 15:19:46.088849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.373 [2024-12-09 15:19:46.089211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.373 [2024-12-09 15:19:46.089231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.374 [2024-12-09 15:19:46.089238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.374 [2024-12-09 15:19:46.089426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.374 [2024-12-09 15:19:46.089599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.374 [2024-12-09 15:19:46.089608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.374 [2024-12-09 15:19:46.089614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.374 [2024-12-09 15:19:46.089621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.374 [2024-12-09 15:19:46.101812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.374 [2024-12-09 15:19:46.102233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.374 [2024-12-09 15:19:46.102250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.374 [2024-12-09 15:19:46.102257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.374 [2024-12-09 15:19:46.102425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.374 [2024-12-09 15:19:46.102592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.374 [2024-12-09 15:19:46.102601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.374 [2024-12-09 15:19:46.102607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.374 [2024-12-09 15:19:46.102612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.374 [2024-12-09 15:19:46.114677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.374 [2024-12-09 15:19:46.115088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.374 [2024-12-09 15:19:46.115132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.374 [2024-12-09 15:19:46.115155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.374 [2024-12-09 15:19:46.115753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.374 [2024-12-09 15:19:46.116194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.374 [2024-12-09 15:19:46.116202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.374 [2024-12-09 15:19:46.116208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.374 [2024-12-09 15:19:46.116214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.374 [2024-12-09 15:19:46.127565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.374 [2024-12-09 15:19:46.127924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.374 [2024-12-09 15:19:46.127940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.374 [2024-12-09 15:19:46.127947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.374 [2024-12-09 15:19:46.128115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.374 [2024-12-09 15:19:46.128288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.374 [2024-12-09 15:19:46.128296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.374 [2024-12-09 15:19:46.128303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.374 [2024-12-09 15:19:46.128309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.374 [2024-12-09 15:19:46.140626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.374 [2024-12-09 15:19:46.141030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.374 [2024-12-09 15:19:46.141047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.374 [2024-12-09 15:19:46.141054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.374 [2024-12-09 15:19:46.141232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.374 [2024-12-09 15:19:46.141407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.374 [2024-12-09 15:19:46.141415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.374 [2024-12-09 15:19:46.141422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.374 [2024-12-09 15:19:46.141428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.374 [2024-12-09 15:19:46.153662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.374 [2024-12-09 15:19:46.154092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.374 [2024-12-09 15:19:46.154112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.374 [2024-12-09 15:19:46.154120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.374 [2024-12-09 15:19:46.154297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.374 [2024-12-09 15:19:46.154471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.374 [2024-12-09 15:19:46.154480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.374 [2024-12-09 15:19:46.154487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.374 [2024-12-09 15:19:46.154493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.634 [2024-12-09 15:19:46.166745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.167154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.167171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.167179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.167360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.167533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.167541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.167548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.167555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.634 [2024-12-09 15:19:46.179875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.180206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.180228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.180236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.180420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.180603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.180612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.180618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.180625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.634 [2024-12-09 15:19:46.192978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.193302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.193320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.193327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.193500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.193673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.193681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.193687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.193693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.634 [2024-12-09 15:19:46.206049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.206479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.206496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.206504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.206687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.206871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.206883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.206889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.206896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.634 [2024-12-09 15:19:46.219030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.219411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.219441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.219448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.219621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.219793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.219802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.219808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.219814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.634 [2024-12-09 15:19:46.232268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.232702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.232720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.232727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.232911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.233098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.233107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.233114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.233120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.634 [2024-12-09 15:19:46.245419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.245864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.245881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.245889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.246073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.246261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.246270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.246277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.246287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.634 [2024-12-09 15:19:46.258781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.259149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.259166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.259174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.259362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.259547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.259556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.259563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.259570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.634 [2024-12-09 15:19:46.271818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.272269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.272287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.272295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.272485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.634 [2024-12-09 15:19:46.272658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.634 [2024-12-09 15:19:46.272667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.634 [2024-12-09 15:19:46.272673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.634 [2024-12-09 15:19:46.272679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.634 [2024-12-09 15:19:46.284916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.634 [2024-12-09 15:19:46.285338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.634 [2024-12-09 15:19:46.285356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.634 [2024-12-09 15:19:46.285363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.634 [2024-12-09 15:19:46.285547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.285730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.285739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.285745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.285752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.635 [2024-12-09 15:19:46.298122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.298570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.298587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.298595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.298779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.298963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.298972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.298979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.298985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.635 [2024-12-09 15:19:46.311246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.311580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.311597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.311605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.311788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.311975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.311984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.311990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.311997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.635 [2024-12-09 15:19:46.324284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.324752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.324768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.324776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.324949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.325122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.325130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.325137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.325143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.635 [2024-12-09 15:19:46.337458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.337821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.337837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.337844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.338032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.338215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.338230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.338237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.338244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.635 [2024-12-09 15:19:46.350883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.351328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.351346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.351354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.351550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.351747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.351756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.351764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.351771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.635 [2024-12-09 15:19:46.363867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.364272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.364289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.364297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.364471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.364643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.364651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.364658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.364664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.635 [2024-12-09 15:19:46.376953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.377411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.377429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.377449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.377622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.377795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.377806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.377813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.377819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.635 [2024-12-09 15:19:46.390048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.390355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.390372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.390380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.390563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.390748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.390757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.390764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.390770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.635 [2024-12-09 15:19:46.403175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.403561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.403579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.403586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.403770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.403954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.403963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.635 [2024-12-09 15:19:46.403970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.635 [2024-12-09 15:19:46.403976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.635 [2024-12-09 15:19:46.416154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.635 [2024-12-09 15:19:46.416528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.635 [2024-12-09 15:19:46.416545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.635 [2024-12-09 15:19:46.416553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.635 [2024-12-09 15:19:46.416725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.635 [2024-12-09 15:19:46.416903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.635 [2024-12-09 15:19:46.416911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.636 [2024-12-09 15:19:46.416917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.636 [2024-12-09 15:19:46.416927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.896 [2024-12-09 15:19:46.429483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.896 [2024-12-09 15:19:46.429901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-12-09 15:19:46.429918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.896 [2024-12-09 15:19:46.429925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.896 [2024-12-09 15:19:46.430109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.896 [2024-12-09 15:19:46.430299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.896 [2024-12-09 15:19:46.430309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.896 [2024-12-09 15:19:46.430315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.896 [2024-12-09 15:19:46.430322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.896 [2024-12-09 15:19:46.442565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.896 [2024-12-09 15:19:46.443026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-12-09 15:19:46.443044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.896 [2024-12-09 15:19:46.443052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.896 [2024-12-09 15:19:46.443239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.896 [2024-12-09 15:19:46.443423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.896 [2024-12-09 15:19:46.443432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.896 [2024-12-09 15:19:46.443439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.896 [2024-12-09 15:19:46.443445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.896 [2024-12-09 15:19:46.455545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.896 [2024-12-09 15:19:46.455975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.896 [2024-12-09 15:19:46.455992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.896 [2024-12-09 15:19:46.455999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.896 [2024-12-09 15:19:46.456172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.896 [2024-12-09 15:19:46.456350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.896 [2024-12-09 15:19:46.456359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.456366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.456372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.897 [2024-12-09 15:19:46.468566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.468999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.469019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.469027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.469200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.469377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.469386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.469392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.469398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.897 [2024-12-09 15:19:46.481627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.481976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.481993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.482000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.482172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.482351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.482360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.482367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.482373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.897 [2024-12-09 15:19:46.494903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.495347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.495364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.495371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.495556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.495740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.495749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.495756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.495762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.897 [2024-12-09 15:19:46.507936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.508364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.508381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.508388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.508565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.508738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.508746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.508752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.508758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.897 [2024-12-09 15:19:46.520991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.521350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.521367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.521374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.521547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.521720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.521728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.521734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.521740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.897 [2024-12-09 15:19:46.534238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.534595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.534613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.534620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.534804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.534987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.534996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.535003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.535009] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.897 [2024-12-09 15:19:46.547330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.547780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.547797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.547805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.547989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.548172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.548184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.548191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.548197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.897 [2024-12-09 15:19:46.560271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.560643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.560660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.560667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.560840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.561014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.561022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.561029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.561035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.897 [2024-12-09 15:19:46.573199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.573655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.573701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.573725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.574211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.574384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.574393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.574399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.574405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.897 [2024-12-09 15:19:46.586019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.897 [2024-12-09 15:19:46.586434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.897 [2024-12-09 15:19:46.586451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.897 [2024-12-09 15:19:46.586458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.897 [2024-12-09 15:19:46.586616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.897 [2024-12-09 15:19:46.586775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.897 [2024-12-09 15:19:46.586783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.897 [2024-12-09 15:19:46.586788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.897 [2024-12-09 15:19:46.586794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.898 [2024-12-09 15:19:46.598852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.599246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.599262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.599269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.599429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.599587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.599595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.599601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.599606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.898 [2024-12-09 15:19:46.611702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.612138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.612155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.612162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.612353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.612527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.612535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.612541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.612547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.898 [2024-12-09 15:19:46.624736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.625093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.625109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.625117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.625306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.625479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.625487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.625494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.625500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.898 [2024-12-09 15:19:46.637724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.638147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.638167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.638174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.638348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.638515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.638523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.638529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.638535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.898 [2024-12-09 15:19:46.650545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.650975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.651023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.651047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.651541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.651711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.651721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.651727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.651734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:44.898 [2024-12-09 15:19:46.663362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.663778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.663823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.663848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.664374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.664544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.664552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.664558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.664564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:44.898 [2024-12-09 15:19:46.676121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.898 [2024-12-09 15:19:46.676535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.898 [2024-12-09 15:19:46.676552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:44.898 [2024-12-09 15:19:46.676559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:44.898 [2024-12-09 15:19:46.676730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:44.898 [2024-12-09 15:19:46.676899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:44.898 [2024-12-09 15:19:46.676908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:44.898 [2024-12-09 15:19:46.676914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:44.898 [2024-12-09 15:19:46.676920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.158 7598.25 IOPS, 29.68 MiB/s [2024-12-09T14:19:46.953Z] [2024-12-09 15:19:46.690475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.158 [2024-12-09 15:19:46.690911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.158 [2024-12-09 15:19:46.690956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.158 [2024-12-09 15:19:46.690980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.691480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.691649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.691657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.691663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.691669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.159 [2024-12-09 15:19:46.703264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.703673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.703690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.703697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.703865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.704033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.704041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.704047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.704053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.159 [2024-12-09 15:19:46.716058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.716494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.716539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.716562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.717146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.717622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.717630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.717640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.717646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.159 [2024-12-09 15:19:46.728834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.729170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.729234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.729259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.729843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.730442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.730478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.730493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.730507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.159 [2024-12-09 15:19:46.743883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.744392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.744437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.744460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.745044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.745577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.745589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.745598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.745607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.159 [2024-12-09 15:19:46.756853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.757258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.757293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.757318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.757900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.758501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.758527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.758548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.758567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.159 [2024-12-09 15:19:46.769746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.770168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.770213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.770252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.770696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.770865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.770874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.770880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.770886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.159 [2024-12-09 15:19:46.782606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.782939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.782956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.782963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.783130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.783305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.783313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.783320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.783325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.159 [2024-12-09 15:19:46.795386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.795745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.795790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.795813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.796411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.796627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.796635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.796641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.796647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.159 [2024-12-09 15:19:46.808155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.808548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.808568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.159 [2024-12-09 15:19:46.808575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.159 [2024-12-09 15:19:46.808734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.159 [2024-12-09 15:19:46.808893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.159 [2024-12-09 15:19:46.808901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.159 [2024-12-09 15:19:46.808906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.159 [2024-12-09 15:19:46.808912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.159 [2024-12-09 15:19:46.820885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.159 [2024-12-09 15:19:46.821297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-09 15:19:46.821314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.821321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.821488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.821658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.821665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.821671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.821677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.160 [2024-12-09 15:19:46.833651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.834043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.834059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.834066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.834231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.834415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.834422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.834429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.834434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.160 [2024-12-09 15:19:46.846484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.846893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.846909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.846916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.847085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.847261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.847270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.847276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.847282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.160 [2024-12-09 15:19:46.859270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.859664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.859680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.859687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.859855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.860023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.860031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.860037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.860043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.160 [2024-12-09 15:19:46.872316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.872728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.872745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.872753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.872926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.873101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.873109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.873115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.873121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.160 [2024-12-09 15:19:46.885299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.885703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.885719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.885726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.885894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.886061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.886069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.886079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.886085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.160 [2024-12-09 15:19:46.898288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.898686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.898702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.898709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.898877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.899045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.899053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.899059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.899065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.160 [2024-12-09 15:19:46.911129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.911542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.911559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.911566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.911735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.911902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.911910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.911916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.911922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.160 [2024-12-09 15:19:46.923989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.924374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.924391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.924398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.924557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.924716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.924724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.924730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.924735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.160 [2024-12-09 15:19:46.936793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.937184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.937200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.937207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.937396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.160 [2024-12-09 15:19:46.937564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.160 [2024-12-09 15:19:46.937572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.160 [2024-12-09 15:19:46.937579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.160 [2024-12-09 15:19:46.937584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.160 [2024-12-09 15:19:46.949941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.160 [2024-12-09 15:19:46.950372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-09 15:19:46.950389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.160 [2024-12-09 15:19:46.950396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.160 [2024-12-09 15:19:46.950568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.161 [2024-12-09 15:19:46.950742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.161 [2024-12-09 15:19:46.950750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.161 [2024-12-09 15:19:46.950756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.161 [2024-12-09 15:19:46.950763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.421 [2024-12-09 15:19:46.962814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:46.963176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:46.963193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:46.963201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:46.963379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:46.963548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:46.963556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:46.963562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:46.963568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.421 [2024-12-09 15:19:46.975575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:46.975974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:46.976017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:46.976053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:46.976653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:46.977193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:46.977201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:46.977207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:46.977213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.421 [2024-12-09 15:19:46.988390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:46.988808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:46.988825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:46.988832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:46.989000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:46.989170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:46.989179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:46.989186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:46.989192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.421 [2024-12-09 15:19:47.001263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.001676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.001692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.001699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:47.001867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:47.002034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:47.002042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:47.002048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:47.002054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.421 [2024-12-09 15:19:47.014007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.014422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.014439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.014446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:47.014614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:47.014784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:47.014792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:47.014798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:47.014804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.421 [2024-12-09 15:19:47.026858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.027273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.027290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.027297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:47.027464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:47.027632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:47.027640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:47.027646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:47.027652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.421 [2024-12-09 15:19:47.039706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.040095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.040110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.040117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:47.040300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:47.040469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:47.040477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:47.040484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:47.040489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.421 [2024-12-09 15:19:47.052486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.052901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.052918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.052925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:47.053093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:47.053266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:47.053275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:47.053285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:47.053291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
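The follow-up "Failed to flush tqpair=0x807aa0 (9): Bad file descriptor" is a consequence of the refused connect: by the time the flush runs, the qpair's socket has already been torn down, so the I/O path is handed an invalid descriptor and errno 9 (EBADF) comes back. A small illustration under the same assumption (plain POSIX calls, not the SPDK flush path):

/* Illustration only: once a socket has been closed, any further I/O on its
 * descriptor fails with errno 9 (EBADF), matching the "(9): Bad file
 * descriptor" flush errors in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                       /* descriptor is no longer valid */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        /* Prints: "write: errno 9 (Bad file descriptor)" */
        printf("write: errno %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}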
00:26:45.421 [2024-12-09 15:19:47.065357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.065744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.065760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.065767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.421 [2024-12-09 15:19:47.065926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.421 [2024-12-09 15:19:47.066084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.421 [2024-12-09 15:19:47.066091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.421 [2024-12-09 15:19:47.066097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.421 [2024-12-09 15:19:47.066103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.421 [2024-12-09 15:19:47.078182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.421 [2024-12-09 15:19:47.078582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.421 [2024-12-09 15:19:47.078599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.421 [2024-12-09 15:19:47.078606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.078773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.078941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.078948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.078954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.078960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.422 [2024-12-09 15:19:47.091017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.091424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.091470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.091493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.092079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.092497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.092505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.092512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.092517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.422 [2024-12-09 15:19:47.103810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.104239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.104285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.104308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.104891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.105408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.105417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.105423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.105429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.422 [2024-12-09 15:19:47.116582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.116985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.117030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.117053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.117519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.117687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.117695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.117701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.117707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.422 [2024-12-09 15:19:47.129404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.129820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.129836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.129843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.130011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.130179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.130187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.130193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.130199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.422 [2024-12-09 15:19:47.142422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.142834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.142850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.142861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.143028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.143195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.143204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.143209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.143215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.422 [2024-12-09 15:19:47.155270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.155639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.155654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.155661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.155819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.155977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.155985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.155991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.155997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.422 [2024-12-09 15:19:47.168123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.168534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.168551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.168558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.168726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.168893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.168901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.168907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.168913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.422 [2024-12-09 15:19:47.180983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.181371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.181387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.181394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.181553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.181713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.181721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.181727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.181733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.422 [2024-12-09 15:19:47.193786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.194199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.194215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.194229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.422 [2024-12-09 15:19:47.194397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.422 [2024-12-09 15:19:47.194564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.422 [2024-12-09 15:19:47.194572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.422 [2024-12-09 15:19:47.194578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.422 [2024-12-09 15:19:47.194584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.422 [2024-12-09 15:19:47.206747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.422 [2024-12-09 15:19:47.207168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.422 [2024-12-09 15:19:47.207213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.422 [2024-12-09 15:19:47.207254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.423 [2024-12-09 15:19:47.207671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.423 [2024-12-09 15:19:47.207840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.423 [2024-12-09 15:19:47.207848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.423 [2024-12-09 15:19:47.207854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.423 [2024-12-09 15:19:47.207860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.682 [2024-12-09 15:19:47.219642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.682 [2024-12-09 15:19:47.220068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.682 [2024-12-09 15:19:47.220085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.682 [2024-12-09 15:19:47.220092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.682 [2024-12-09 15:19:47.220275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.682 [2024-12-09 15:19:47.220448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.682 [2024-12-09 15:19:47.220457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.682 [2024-12-09 15:19:47.220467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.682 [2024-12-09 15:19:47.220473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.682 [2024-12-09 15:19:47.232380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.682 [2024-12-09 15:19:47.232813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.682 [2024-12-09 15:19:47.232847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.682 [2024-12-09 15:19:47.232870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.682 [2024-12-09 15:19:47.233469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.682 [2024-12-09 15:19:47.233669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.682 [2024-12-09 15:19:47.233678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.682 [2024-12-09 15:19:47.233685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.682 [2024-12-09 15:19:47.233691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.682 [2024-12-09 15:19:47.245156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.245583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.245629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.245652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.246252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.246644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.246661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.246675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.246688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.683 [2024-12-09 15:19:47.260108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.260633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.260678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.260701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.261296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.261858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.261869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.261878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.261887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
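Taken together, the entries above form one disconnect/reconnect cycle repeating roughly every 12-13 ms: resetting controller, connect() refused, controller reinitialization failed, the reset is marked failed, then the next attempt starts. As a hedged sketch only (illustrative interval and retry cap; not bdev_nvme's actual reconnect policy), the shape of such a bounded retry loop is:

/* Generic reconnect-poll sketch, an assumption rather than SPDK logic:
 * retry a failing connect step on a fixed interval and give up after a
 * bounded number of attempts. Interval and limit are illustrative. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for the reconnect attempt; in the log above this corresponds to
 * nvme_tcp_qpair_connect_sock() failing with ECONNREFUSED. */
static bool try_reconnect(void)
{
    return false;   /* pretend the target is still down */
}

int main(void)
{
    const int max_attempts = 10;
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 13 * 1000 * 1000 };

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_reconnect()) {
            printf("reconnected after %d attempt(s)\n", attempt);
            return 0;
        }
        fprintf(stderr, "attempt %d: reconnect failed, retrying\n", attempt);
        nanosleep(&delay, NULL);
    }

    fprintf(stderr, "giving up after %d attempts\n", max_attempts);
    return 1;
}

In the log the cycle keeps going because the test keeps the target offline, so every poll ends in the same ECONNREFUSED and the controller stays in the failed state.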
00:26:45.683 [2024-12-09 15:19:47.273164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.273592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.273608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.273615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.273783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.273951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.273959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.273966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.273971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.683 [2024-12-09 15:19:47.285924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.286331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.286348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.286355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.286523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.286691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.286698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.286704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.286710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.683 [2024-12-09 15:19:47.298751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.299165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.299182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.299189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.299363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.299531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.299540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.299546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.299552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.683 [2024-12-09 15:19:47.311611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.312021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.312037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.312047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.312215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.312390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.312398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.312404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.312410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.683 [2024-12-09 15:19:47.324459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.324872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.324888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.324895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.325063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.325237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.325245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.325251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.325257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.683 [2024-12-09 15:19:47.337305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.337687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.337732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.337755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.338215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.338405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.338413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.338418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.338424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.683 [2024-12-09 15:19:47.350130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.350520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.350536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.350543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.350711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.350881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.350890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.350896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.350902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.683 [2024-12-09 15:19:47.362912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.683 [2024-12-09 15:19:47.363310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.683 [2024-12-09 15:19:47.363326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.683 [2024-12-09 15:19:47.363333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.683 [2024-12-09 15:19:47.363491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.683 [2024-12-09 15:19:47.363650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.683 [2024-12-09 15:19:47.363658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.683 [2024-12-09 15:19:47.363664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.683 [2024-12-09 15:19:47.363670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.684 [2024-12-09 15:19:47.375719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.376107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.376124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.376131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.376316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.376485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.376493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.376499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.376505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.684 [2024-12-09 15:19:47.388481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.388912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.388929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.388937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.389110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.389291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.389300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.389308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.389321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.684 [2024-12-09 15:19:47.401502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.401917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.401964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.401987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.402584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.403179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.403188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.403194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.403200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.684 [2024-12-09 15:19:47.414444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.414849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.414865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.414872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.415040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.415209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.415223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.415229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.415236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.684 [2024-12-09 15:19:47.427289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.427599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.427615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.427621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.427780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.427938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.427946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.427952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.427957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.684 [2024-12-09 15:19:47.440074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.440481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.440496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.440503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.440670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.440837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.440845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.440851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.440857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.684 [2024-12-09 15:19:47.452946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.453364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.453409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.453432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.454015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.454440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.454449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.454455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.454461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.684 [2024-12-09 15:19:47.465738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.684 [2024-12-09 15:19:47.466141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.684 [2024-12-09 15:19:47.466158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.684 [2024-12-09 15:19:47.466164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.684 [2024-12-09 15:19:47.466350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.684 [2024-12-09 15:19:47.466518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.684 [2024-12-09 15:19:47.466526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.684 [2024-12-09 15:19:47.466533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.684 [2024-12-09 15:19:47.466539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.944 [2024-12-09 15:19:47.478734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.944 [2024-12-09 15:19:47.479141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-12-09 15:19:47.479186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.944 [2024-12-09 15:19:47.479210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.944 [2024-12-09 15:19:47.479835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.944 [2024-12-09 15:19:47.480420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.944 [2024-12-09 15:19:47.480429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.944 [2024-12-09 15:19:47.480435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.944 [2024-12-09 15:19:47.480442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.944 [2024-12-09 15:19:47.491543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.944 [2024-12-09 15:19:47.491957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-12-09 15:19:47.491973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.944 [2024-12-09 15:19:47.491981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.944 [2024-12-09 15:19:47.492149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.944 [2024-12-09 15:19:47.492323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.944 [2024-12-09 15:19:47.492332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.944 [2024-12-09 15:19:47.492338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.944 [2024-12-09 15:19:47.492344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.944 [2024-12-09 15:19:47.504577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.944 [2024-12-09 15:19:47.504978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-12-09 15:19:47.504994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.944 [2024-12-09 15:19:47.505001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.944 [2024-12-09 15:19:47.505169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.944 [2024-12-09 15:19:47.505344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.944 [2024-12-09 15:19:47.505353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.944 [2024-12-09 15:19:47.505359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.944 [2024-12-09 15:19:47.505365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.944 [2024-12-09 15:19:47.517340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.944 [2024-12-09 15:19:47.517726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-12-09 15:19:47.517742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.944 [2024-12-09 15:19:47.517748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.944 [2024-12-09 15:19:47.517907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.944 [2024-12-09 15:19:47.518065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.944 [2024-12-09 15:19:47.518076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.944 [2024-12-09 15:19:47.518082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.944 [2024-12-09 15:19:47.518087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.944 [2024-12-09 15:19:47.530166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.944 [2024-12-09 15:19:47.530586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-12-09 15:19:47.530631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.944 [2024-12-09 15:19:47.530654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.944 [2024-12-09 15:19:47.531250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.944 [2024-12-09 15:19:47.531602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.944 [2024-12-09 15:19:47.531610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.531616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.531622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.945 [2024-12-09 15:19:47.543175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.543581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.543598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.543606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.543779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.543951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.543960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.543966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.543972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.945 [2024-12-09 15:19:47.556202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.556629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.556646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.556653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.556825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.557001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.557009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.557016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.557025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.945 [2024-12-09 15:19:47.569253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.569616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.569632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.569640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.569813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.569988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.570004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.570011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.570018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.945 [2024-12-09 15:19:47.582285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.582642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.582659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.582668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.582841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.583016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.583024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.583032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.583038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.945 [2024-12-09 15:19:47.595447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.595851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.595868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.595876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.596048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.596234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.596243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.596250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.596256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.945 [2024-12-09 15:19:47.608281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.608676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.608692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.608698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.608857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.609017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.609025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.609031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.609037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.945 [2024-12-09 15:19:47.621119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.621468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.621485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.621492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.621659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.621828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.621836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.621842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.621848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.945 [2024-12-09 15:19:47.633939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.634308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.634325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.634332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.634500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.634667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.634675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.634681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.634687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.945 [2024-12-09 15:19:47.646803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.647248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.647266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.647274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.647450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.647624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.647632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.647639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.647645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.945 [2024-12-09 15:19:47.659859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.945 [2024-12-09 15:19:47.660289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-12-09 15:19:47.660335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.945 [2024-12-09 15:19:47.660359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.945 [2024-12-09 15:19:47.660620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.945 [2024-12-09 15:19:47.660794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.945 [2024-12-09 15:19:47.660802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.945 [2024-12-09 15:19:47.660810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.945 [2024-12-09 15:19:47.660826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.946 [2024-12-09 15:19:47.674679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.946 [2024-12-09 15:19:47.675136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-12-09 15:19:47.675159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.946 [2024-12-09 15:19:47.675169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.946 [2024-12-09 15:19:47.675431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.946 [2024-12-09 15:19:47.675688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.946 [2024-12-09 15:19:47.675699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.946 [2024-12-09 15:19:47.675709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.946 [2024-12-09 15:19:47.675718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.946 [2024-12-09 15:19:47.687813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.946 [2024-12-09 15:19:47.688250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-12-09 15:19:47.688268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.946 [2024-12-09 15:19:47.688276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.946 [2024-12-09 15:19:47.688449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.946 [2024-12-09 15:19:47.688623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.946 [2024-12-09 15:19:47.688634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.946 [2024-12-09 15:19:47.688641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.946 [2024-12-09 15:19:47.688647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.946 6078.60 IOPS, 23.74 MiB/s [2024-12-09T14:19:47.741Z] [2024-12-09 15:19:47.700911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.946 [2024-12-09 15:19:47.701366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-12-09 15:19:47.701384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.946 [2024-12-09 15:19:47.701392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.946 [2024-12-09 15:19:47.701587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.946 [2024-12-09 15:19:47.701761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.946 [2024-12-09 15:19:47.701769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.946 [2024-12-09 15:19:47.701776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.946 [2024-12-09 15:19:47.701782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:45.946 [2024-12-09 15:19:47.714020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.946 [2024-12-09 15:19:47.714363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-12-09 15:19:47.714380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.946 [2024-12-09 15:19:47.714387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.946 [2024-12-09 15:19:47.714560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.946 [2024-12-09 15:19:47.714737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.946 [2024-12-09 15:19:47.714746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.946 [2024-12-09 15:19:47.714752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.946 [2024-12-09 15:19:47.714758] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:45.946 [2024-12-09 15:19:47.726999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:45.946 [2024-12-09 15:19:47.727443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-12-09 15:19:47.727461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:45.946 [2024-12-09 15:19:47.727468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:45.946 [2024-12-09 15:19:47.727641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:45.946 [2024-12-09 15:19:47.727813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:45.946 [2024-12-09 15:19:47.727822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:45.946 [2024-12-09 15:19:47.727828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:45.946 [2024-12-09 15:19:47.727838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.206 [2024-12-09 15:19:47.740187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.206 [2024-12-09 15:19:47.740619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-12-09 15:19:47.740636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.206 [2024-12-09 15:19:47.740644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.206 [2024-12-09 15:19:47.740816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.206 [2024-12-09 15:19:47.740989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.206 [2024-12-09 15:19:47.740997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.206 [2024-12-09 15:19:47.741003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.206 [2024-12-09 15:19:47.741010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.206 [2024-12-09 15:19:47.753237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.206 [2024-12-09 15:19:47.753638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-12-09 15:19:47.753655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.206 [2024-12-09 15:19:47.753663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.206 [2024-12-09 15:19:47.753846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.206 [2024-12-09 15:19:47.754030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.206 [2024-12-09 15:19:47.754038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.206 [2024-12-09 15:19:47.754045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.206 [2024-12-09 15:19:47.754051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.206 [2024-12-09 15:19:47.766323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.206 [2024-12-09 15:19:47.766687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-12-09 15:19:47.766704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.206 [2024-12-09 15:19:47.766712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.206 [2024-12-09 15:19:47.766895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.206 [2024-12-09 15:19:47.767080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.206 [2024-12-09 15:19:47.767088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.206 [2024-12-09 15:19:47.767095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.206 [2024-12-09 15:19:47.767101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.206 [2024-12-09 15:19:47.779421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.206 [2024-12-09 15:19:47.779826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-12-09 15:19:47.779843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.206 [2024-12-09 15:19:47.779850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.206 [2024-12-09 15:19:47.780023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.206 [2024-12-09 15:19:47.780201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.206 [2024-12-09 15:19:47.780211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.206 [2024-12-09 15:19:47.780222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.206 [2024-12-09 15:19:47.780228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.207 [2024-12-09 15:19:47.792455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.792884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.792900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.792907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.793080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.793260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.793269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.793276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.793282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.207 [2024-12-09 15:19:47.805743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.806191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.806209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.806221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.806404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.806587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.806596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.806603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.806609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.207 [2024-12-09 15:19:47.818916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.819328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.819345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.819353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.819541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.819724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.819732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.819739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.819746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.207 [2024-12-09 15:19:47.831993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.832421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.832439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.832446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.832619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.832795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.832804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.832810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.832816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.207 [2024-12-09 15:19:47.845045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.845455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.845472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.845480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.845652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.845826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.845835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.845841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.845847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.207 [2024-12-09 15:19:47.858266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.858720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.858737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.858745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.858928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.859111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.859123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.859130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.859137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.207 [2024-12-09 15:19:47.871239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.871694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.871712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.871720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.871903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.872088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.872097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.872104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.872110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.207 [2024-12-09 15:19:47.884350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.884712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.884731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.884739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.884922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.885106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.885115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.885122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.885129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.207 [2024-12-09 15:19:47.897412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.897850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.897868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.897876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.898060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.898249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.898258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.898265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.898275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.207 [2024-12-09 15:19:47.910571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.207 [2024-12-09 15:19:47.911017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-12-09 15:19:47.911034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.207 [2024-12-09 15:19:47.911041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.207 [2024-12-09 15:19:47.911230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.207 [2024-12-09 15:19:47.911415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.207 [2024-12-09 15:19:47.911424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.207 [2024-12-09 15:19:47.911430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.207 [2024-12-09 15:19:47.911437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.207 [2024-12-09 15:19:47.923741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.208 [2024-12-09 15:19:47.924162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-12-09 15:19:47.924179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.208 [2024-12-09 15:19:47.924187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.208 [2024-12-09 15:19:47.924376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.208 [2024-12-09 15:19:47.924560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.208 [2024-12-09 15:19:47.924569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.208 [2024-12-09 15:19:47.924576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.208 [2024-12-09 15:19:47.924583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.208 [2024-12-09 15:19:47.936974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.208 [2024-12-09 15:19:47.937411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-12-09 15:19:47.937429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.208 [2024-12-09 15:19:47.937437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.208 [2024-12-09 15:19:47.937620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.208 [2024-12-09 15:19:47.937808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.208 [2024-12-09 15:19:47.937816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.208 [2024-12-09 15:19:47.937823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.208 [2024-12-09 15:19:47.937830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.208 [2024-12-09 15:19:47.949990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.208 [2024-12-09 15:19:47.950447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-12-09 15:19:47.950480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.208 [2024-12-09 15:19:47.950488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.208 [2024-12-09 15:19:47.950661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.208 [2024-12-09 15:19:47.950835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.208 [2024-12-09 15:19:47.950843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.208 [2024-12-09 15:19:47.950849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.208 [2024-12-09 15:19:47.950855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.208 [2024-12-09 15:19:47.963000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.208 [2024-12-09 15:19:47.963452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-12-09 15:19:47.963469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.208 [2024-12-09 15:19:47.963476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.208 [2024-12-09 15:19:47.963649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.208 [2024-12-09 15:19:47.963821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.208 [2024-12-09 15:19:47.963829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.208 [2024-12-09 15:19:47.963835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.208 [2024-12-09 15:19:47.963842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.208 [2024-12-09 15:19:47.976068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.208 [2024-12-09 15:19:47.976451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-12-09 15:19:47.976468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.208 [2024-12-09 15:19:47.976475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.208 [2024-12-09 15:19:47.976647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.208 [2024-12-09 15:19:47.976820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.208 [2024-12-09 15:19:47.976828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.208 [2024-12-09 15:19:47.976834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.208 [2024-12-09 15:19:47.976840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.208 [2024-12-09 15:19:47.988983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.208 [2024-12-09 15:19:47.989346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-12-09 15:19:47.989392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.208 [2024-12-09 15:19:47.989416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.208 [2024-12-09 15:19:47.990013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.208 [2024-12-09 15:19:47.990579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.208 [2024-12-09 15:19:47.990589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.208 [2024-12-09 15:19:47.990596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.208 [2024-12-09 15:19:47.990602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.469 [2024-12-09 15:19:48.001946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.002334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.002350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.002357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.002526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.002694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.002702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.002708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.002714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.469 [2024-12-09 15:19:48.014773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.015191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.015207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.015214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.015386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.015554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.015562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.015568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.015574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.469 [2024-12-09 15:19:48.027641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.028072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.028088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.028095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.028269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.028437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.028448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.028455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.028461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.469 [2024-12-09 15:19:48.040652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.041075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.041091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.041098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.041271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.041440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.041448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.041454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.041460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.469 [2024-12-09 15:19:48.053514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.053925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.053941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.053947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.054106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.054270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.054279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.054285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.054291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.469 [2024-12-09 15:19:48.066270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.066580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.066596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.066603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.066761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.066920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.066928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.066934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.066940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.469 [2024-12-09 15:19:48.079062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.079512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.079528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.079535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.469 [2024-12-09 15:19:48.079703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.469 [2024-12-09 15:19:48.079871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.469 [2024-12-09 15:19:48.079879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.469 [2024-12-09 15:19:48.079885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.469 [2024-12-09 15:19:48.079891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.469 [2024-12-09 15:19:48.091802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.469 [2024-12-09 15:19:48.092110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.469 [2024-12-09 15:19:48.092126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.469 [2024-12-09 15:19:48.092133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.092317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.092485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.092493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.092499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.092505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.470 [2024-12-09 15:19:48.104557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.104943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.104959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.104965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.105124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.105289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.105298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.105304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.105309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.470 [2024-12-09 15:19:48.117429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.117843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.117862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.117868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.118027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.118186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.118194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.118200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.118205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.470 [2024-12-09 15:19:48.130164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.130500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.130515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.130522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.130682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.130840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.130848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.130854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.130860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.470 [2024-12-09 15:19:48.142984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.143400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.143417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.143424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.143583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.143742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.143750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.143756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.143761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.470 [2024-12-09 15:19:48.155813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.156244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.156261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.156268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.156439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.156642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.156650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.156656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.156662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.470 [2024-12-09 15:19:48.168857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.169288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.169305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.169312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.169481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.169648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.169656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.169662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.169668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.470 [2024-12-09 15:19:48.181708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.182120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.182170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.182194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.182793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.183292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.183300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.183306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.183313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.470 [2024-12-09 15:19:48.194712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.195122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.195138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.195146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.195323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.470 [2024-12-09 15:19:48.195496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.470 [2024-12-09 15:19:48.195504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.470 [2024-12-09 15:19:48.195514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.470 [2024-12-09 15:19:48.195520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.470 [2024-12-09 15:19:48.207738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.470 [2024-12-09 15:19:48.208168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.470 [2024-12-09 15:19:48.208184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.470 [2024-12-09 15:19:48.208192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.470 [2024-12-09 15:19:48.208371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.471 [2024-12-09 15:19:48.208545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.471 [2024-12-09 15:19:48.208553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.471 [2024-12-09 15:19:48.208560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.471 [2024-12-09 15:19:48.208566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.471 [2024-12-09 15:19:48.220648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.471 [2024-12-09 15:19:48.221039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-12-09 15:19:48.221055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.471 [2024-12-09 15:19:48.221062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.471 [2024-12-09 15:19:48.221237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.471 [2024-12-09 15:19:48.221405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.471 [2024-12-09 15:19:48.221413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.471 [2024-12-09 15:19:48.221419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.471 [2024-12-09 15:19:48.221425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.471 [2024-12-09 15:19:48.233395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.471 [2024-12-09 15:19:48.233810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-12-09 15:19:48.233826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.471 [2024-12-09 15:19:48.233833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.471 [2024-12-09 15:19:48.233992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.471 [2024-12-09 15:19:48.234151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.471 [2024-12-09 15:19:48.234158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.471 [2024-12-09 15:19:48.234164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.471 [2024-12-09 15:19:48.234170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.471 [2024-12-09 15:19:48.246255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.471 [2024-12-09 15:19:48.246665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-12-09 15:19:48.246681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.471 [2024-12-09 15:19:48.246688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.471 [2024-12-09 15:19:48.246847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.471 [2024-12-09 15:19:48.247005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.471 [2024-12-09 15:19:48.247013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.471 [2024-12-09 15:19:48.247019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.471 [2024-12-09 15:19:48.247024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.471 [2024-12-09 15:19:48.259194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.471 [2024-12-09 15:19:48.259615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.471 [2024-12-09 15:19:48.259632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.471 [2024-12-09 15:19:48.259640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.471 [2024-12-09 15:19:48.259813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.471 [2024-12-09 15:19:48.259986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.471 [2024-12-09 15:19:48.259994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.471 [2024-12-09 15:19:48.260001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.471 [2024-12-09 15:19:48.260007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.732 [2024-12-09 15:19:48.272053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.272412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.272429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.272436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.272604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.272771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.272779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.272786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.272792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.732 [2024-12-09 15:19:48.284870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.285239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.285258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.285268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.285437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.285605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.285614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.285620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.285626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.732 [2024-12-09 15:19:48.297640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.298058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.298074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.298082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.298256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.298424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.298432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.298439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.298444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.732 [2024-12-09 15:19:48.310392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.310749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.310786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.310812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.311363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.311533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.311541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.311547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.311553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.732 [2024-12-09 15:19:48.323168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.323537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.323554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.323562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1580795 Killed "${NVMF_APP[@]}" "$@" 00:26:46.732 [2024-12-09 15:19:48.323729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.323897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.323906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.323912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.323918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1582181 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1582181 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1582181 ']' 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.732 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.732 [2024-12-09 15:19:48.336212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.336488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.336506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.336513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.336692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.336866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.336874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.336880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.336886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
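[editor's note] The trace above restarts nvmf_tgt (pid 1582181) and then runs waitforlisten against it, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A minimal sketch of that wait-for-listen idea is shown below; it is an illustration only, not the autotest helper itself, and the socket path and retry count are assumptions taken from the message printed by the script.

    # Hypothetical helper: poll until the SPDK RPC UNIX socket exists,
    # mirroring what the traced waitforlisten step waits for.
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock}   # default path echoed by the script above
        local retries=${2:-100}               # assumed retry budget
        for ((i = 0; i < retries; i++)); do
            [[ -S "$sock" ]] && return 0      # socket file present: target is listening
            sleep 0.1
        done
        return 1                              # timed out waiting for the socket
    }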
00:26:46.732 [2024-12-09 15:19:48.349301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.349643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.349659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.349666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.349842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.350015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.732 [2024-12-09 15:19:48.350023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.732 [2024-12-09 15:19:48.350030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.732 [2024-12-09 15:19:48.350036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.732 [2024-12-09 15:19:48.362268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.732 [2024-12-09 15:19:48.362669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.732 [2024-12-09 15:19:48.362686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.732 [2024-12-09 15:19:48.362693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.732 [2024-12-09 15:19:48.362867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.732 [2024-12-09 15:19:48.363040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.363049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.363055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.363062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.733 [2024-12-09 15:19:48.375264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.375707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.375724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.375732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.375905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.376079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.376087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.376093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.376099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.733 [2024-12-09 15:19:48.381682] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:26:46.733 [2024-12-09 15:19:48.381720] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.733 [2024-12-09 15:19:48.388292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.388724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.388741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.388753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.388927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.389101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.389109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.389116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.389122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.733 [2024-12-09 15:19:48.401281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.401696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.401713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.401721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.401894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.402069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.402077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.402084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.402090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.733 [2024-12-09 15:19:48.414389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.414747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.414764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.414772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.414944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.415116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.415125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.415131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.415138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.733 [2024-12-09 15:19:48.427375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.427777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.427793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.427801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.427974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.428145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.428156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.428163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.428170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.733 [2024-12-09 15:19:48.440362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.440785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.440802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.440809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.440982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.441158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.441166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.441174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.441180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.733 [2024-12-09 15:19:48.453333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.453787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.453804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.453811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.453984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.454158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.454166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.454172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.454178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.733 [2024-12-09 15:19:48.460785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:46.733 [2024-12-09 15:19:48.466341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.466795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.466812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.466820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.466994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.467167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.467175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.467186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.467192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.733 [2024-12-09 15:19:48.479368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.479803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.479820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.479828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.480001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.733 [2024-12-09 15:19:48.480174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.733 [2024-12-09 15:19:48.480183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.733 [2024-12-09 15:19:48.480189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.733 [2024-12-09 15:19:48.480195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.733 [2024-12-09 15:19:48.492302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.733 [2024-12-09 15:19:48.492750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.733 [2024-12-09 15:19:48.492767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.733 [2024-12-09 15:19:48.492774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.733 [2024-12-09 15:19:48.492948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.734 [2024-12-09 15:19:48.493121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.734 [2024-12-09 15:19:48.493129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.734 [2024-12-09 15:19:48.493136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.734 [2024-12-09 15:19:48.493143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.734 [2024-12-09 15:19:48.501345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.734 [2024-12-09 15:19:48.501369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.734 [2024-12-09 15:19:48.501376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.734 [2024-12-09 15:19:48.501382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.734 [2024-12-09 15:19:48.501387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
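[editor's note] The app_setup_trace notices above suggest how the trace data from this run could be inspected. A hedged sketch using only the commands and path named in those notices (the destination path for the copy is an assumption):

    # Capture a snapshot of the nvmf tracepoints from app instance 0, as suggested above.
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0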
00:26:46.734 [2024-12-09 15:19:48.502706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.734 [2024-12-09 15:19:48.502820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.734 [2024-12-09 15:19:48.502822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.734 [2024-12-09 15:19:48.505373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.734 [2024-12-09 15:19:48.505819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.734 [2024-12-09 15:19:48.505839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.734 [2024-12-09 15:19:48.505851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.734 [2024-12-09 15:19:48.506025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.734 [2024-12-09 15:19:48.506199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.734 [2024-12-09 15:19:48.506208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.734 [2024-12-09 15:19:48.506214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.734 [2024-12-09 15:19:48.506226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.734 [2024-12-09 15:19:48.518473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.734 [2024-12-09 15:19:48.518932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.734 [2024-12-09 15:19:48.518953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.734 [2024-12-09 15:19:48.518961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.734 [2024-12-09 15:19:48.519137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.734 [2024-12-09 15:19:48.519316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.734 [2024-12-09 15:19:48.519326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.734 [2024-12-09 15:19:48.519332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.734 [2024-12-09 15:19:48.519339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.993 [2024-12-09 15:19:48.531589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.993 [2024-12-09 15:19:48.532025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.993 [2024-12-09 15:19:48.532045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.993 [2024-12-09 15:19:48.532054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.993 [2024-12-09 15:19:48.532234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.993 [2024-12-09 15:19:48.532409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.993 [2024-12-09 15:19:48.532418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.993 [2024-12-09 15:19:48.532425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.993 [2024-12-09 15:19:48.532431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.993 [2024-12-09 15:19:48.544662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.993 [2024-12-09 15:19:48.545116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.993 [2024-12-09 15:19:48.545135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.993 [2024-12-09 15:19:48.545144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.993 [2024-12-09 15:19:48.545322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.993 [2024-12-09 15:19:48.545496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.993 [2024-12-09 15:19:48.545510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.993 [2024-12-09 15:19:48.545517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.993 [2024-12-09 15:19:48.545524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
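Every retry in this loop fails the same way: errno = 111 is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 yet (the listener is only added further down once the target side finishes its setup). A quick way to confirm the listener state, using generic tools (ss, nc) rather than anything from the test harness; the namespace name is the one this run uses:

  # from the target namespace: is anything listening on the NVMe/TCP port?
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo 'no listener on 4420'
  # from the initiator side: does a plain TCP connect succeed?
  nc -z -w 1 10.0.0.2 4420; echo "connect rc=$?"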
00:26:46.993 [2024-12-09 15:19:48.557757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.993 [2024-12-09 15:19:48.558205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.993 [2024-12-09 15:19:48.558230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.993 [2024-12-09 15:19:48.558239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.993 [2024-12-09 15:19:48.558414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.993 [2024-12-09 15:19:48.558587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.993 [2024-12-09 15:19:48.558595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.993 [2024-12-09 15:19:48.558602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.993 [2024-12-09 15:19:48.558608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.993 [2024-12-09 15:19:48.570843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.993 [2024-12-09 15:19:48.571182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.993 [2024-12-09 15:19:48.571200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.993 [2024-12-09 15:19:48.571207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.993 [2024-12-09 15:19:48.571386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.993 [2024-12-09 15:19:48.571560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.993 [2024-12-09 15:19:48.571568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.993 [2024-12-09 15:19:48.571575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.993 [2024-12-09 15:19:48.571582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.993 [2024-12-09 15:19:48.583979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.993 [2024-12-09 15:19:48.584388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.584406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.584414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.584587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.584761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.584769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.584776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.584787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.994 [2024-12-09 15:19:48.597013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.597375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.597393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.597400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.597573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.597745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.597754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.597760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.597766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.994 [2024-12-09 15:19:48.609994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.610329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.610346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.610353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.610526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.610699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.610708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.610714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.610720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.994 [2024-12-09 15:19:48.623112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.623435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.623452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.623460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.623632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.623806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.623815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.623825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.623832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.994 [2024-12-09 15:19:48.636236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.636595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.636611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.636618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.636791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.636967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.636976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.636983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.636989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.994 [2024-12-09 15:19:48.638183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.994 [2024-12-09 15:19:48.649258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.649547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.649565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.649573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.649745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.649919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.649927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.649934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.649940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.994 [2024-12-09 15:19:48.662342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.662774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.662791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.662803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.662977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.663150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.663158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.663165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.663171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:46.994 Malloc0 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.994 [2024-12-09 15:19:48.675414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.675849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.675867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.994 [2024-12-09 15:19:48.675875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.994 [2024-12-09 15:19:48.676048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.994 [2024-12-09 15:19:48.676226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.994 [2024-12-09 15:19:48.676235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.994 [2024-12-09 15:19:48.676242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:46.994 [2024-12-09 15:19:48.676248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.994 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.994 [2024-12-09 15:19:48.688485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:46.994 [2024-12-09 15:19:48.688917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.994 [2024-12-09 15:19:48.688934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x807aa0 with addr=10.0.0.2, port=4420 00:26:46.995 [2024-12-09 15:19:48.688941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x807aa0 is same with the state(6) to be set 00:26:46.995 [2024-12-09 15:19:48.689114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x807aa0 (9): Bad file descriptor 00:26:46.995 [2024-12-09 15:19:48.689291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:46.995 [2024-12-09 15:19:48.689300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:46.995 [2024-12-09 15:19:48.689307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:46.995 [2024-12-09 15:19:48.689317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:46.995 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.995 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.995 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.995 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.995 5065.50 IOPS, 19.79 MiB/s [2024-12-09T14:19:48.790Z] [2024-12-09 15:19:48.694482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.995 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.995 15:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1581264 00:26:46.995 [2024-12-09 15:19:48.701567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:47.253 [2024-12-09 15:19:48.851882] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:26:49.127 5751.00 IOPS, 22.46 MiB/s [2024-12-09T14:19:51.857Z] 6470.62 IOPS, 25.28 MiB/s [2024-12-09T14:19:52.792Z] 7042.78 IOPS, 27.51 MiB/s [2024-12-09T14:19:53.724Z] 7482.40 IOPS, 29.23 MiB/s [2024-12-09T14:19:55.096Z] 7838.45 IOPS, 30.62 MiB/s [2024-12-09T14:19:56.027Z] 8142.75 IOPS, 31.81 MiB/s [2024-12-09T14:19:56.960Z] 8388.00 IOPS, 32.77 MiB/s [2024-12-09T14:19:57.893Z] 8600.14 IOPS, 33.59 MiB/s [2024-12-09T14:19:57.893Z] 8782.87 IOPS, 34.31 MiB/s 00:26:56.098 Latency(us) 00:26:56.098 [2024-12-09T14:19:57.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:56.098 Verification LBA range: start 0x0 length 0x4000 00:26:56.098 Nvme1n1 : 15.01 8787.60 34.33 11261.27 0.00 6364.77 473.97 14542.75 00:26:56.098 [2024-12-09T14:19:57.893Z] =================================================================================================================== 00:26:56.098 [2024-12-09T14:19:57.893Z] Total : 8787.60 34.33 11261.27 0.00 6364.77 473.97 14542.75 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:56.098 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.355 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
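The rpc_cmd calls traced above bring the target up end to end: TCP transport, a malloc bdev, the cnode1 subsystem, its namespace, and finally the 10.0.0.2:4420 listener. The same sequence can be replayed by hand with scripts/rpc.py, a sketch assuming the default /var/tmp/spdk.sock RPC socket that rpc_cmd talks to (flags copied verbatim from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192              # same transport options, -u 8192 = in-capsule data size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420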
00:26:56.355 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.355 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.355 rmmod nvme_tcp 00:26:56.355 rmmod nvme_fabrics 00:26:56.355 rmmod nvme_keyring 00:26:56.355 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.355 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1582181 ']' 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1582181 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1582181 ']' 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1582181 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.356 15:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1582181 00:26:56.356 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:56.356 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:56.356 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1582181' 00:26:56.356 killing process with pid 1582181 00:26:56.356 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1582181 00:26:56.356 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1582181 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.614 15:19:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.516 00:26:58.516 real 0m25.978s 00:26:58.516 user 1m0.410s 00:26:58.516 sys 0m6.802s 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.516 ************************************ 00:26:58.516 END TEST nvmf_bdevperf 00:26:58.516 ************************************ 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.516 15:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.775 ************************************ 00:26:58.775 START TEST nvmf_target_disconnect 00:26:58.775 ************************************ 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:58.775 * Looking for test storage... 00:26:58.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.775 --rc genhtml_branch_coverage=1 00:26:58.775 --rc genhtml_function_coverage=1 00:26:58.775 --rc genhtml_legend=1 00:26:58.775 --rc geninfo_all_blocks=1 00:26:58.775 --rc geninfo_unexecuted_blocks=1 00:26:58.775 00:26:58.775 ' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.775 --rc genhtml_branch_coverage=1 00:26:58.775 --rc genhtml_function_coverage=1 00:26:58.775 --rc genhtml_legend=1 00:26:58.775 --rc geninfo_all_blocks=1 00:26:58.775 --rc geninfo_unexecuted_blocks=1 00:26:58.775 00:26:58.775 ' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.775 --rc genhtml_branch_coverage=1 00:26:58.775 --rc genhtml_function_coverage=1 00:26:58.775 --rc genhtml_legend=1 00:26:58.775 --rc geninfo_all_blocks=1 00:26:58.775 --rc geninfo_unexecuted_blocks=1 00:26:58.775 00:26:58.775 ' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.775 --rc genhtml_branch_coverage=1 00:26:58.775 --rc genhtml_function_coverage=1 00:26:58.775 --rc genhtml_legend=1 00:26:58.775 --rc geninfo_all_blocks=1 00:26:58.775 --rc geninfo_unexecuted_blocks=1 00:26:58.775 00:26:58.775 ' 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.775 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.776 15:20:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:05.344 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:05.344 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:05.344 Found net devices under 0000:af:00.0: cvl_0_0 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:05.344 Found net devices under 0000:af:00.1: cvl_0_1 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
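The discovery loop above resolves each supported NIC PCI function to its kernel net device by globbing sysfs, which is also an easy manual check; a sketch using the E810 addresses reported in this run:

  # net devices bound to the two E810 ports found above
  ls /sys/bus/pci/devices/0000:af:00.0/net/    # expected: cvl_0_0
  ls /sys/bus/pci/devices/0000:af:00.1/net/    # expected: cvl_0_1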
00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.344 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:27:05.345 00:27:05.345 --- 10.0.0.2 ping statistics --- 00:27:05.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.345 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:27:05.345 00:27:05.345 --- 10.0.0.1 ping statistics --- 00:27:05.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.345 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:05.345 ************************************ 00:27:05.345 START TEST nvmf_target_disconnect_tc1 00:27:05.345 ************************************ 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:05.345 15:20:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:05.345 [2024-12-09 15:20:06.572410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.345 [2024-12-09 15:20:06.572454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22bf410 with addr=10.0.0.2, port=4420 00:27:05.345 [2024-12-09 15:20:06.572477] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:05.345 [2024-12-09 15:20:06.572489] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:05.345 [2024-12-09 15:20:06.572495] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:05.345 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:05.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:05.345 Initializing NVMe Controllers 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.345 00:27:05.345 real 0m0.121s 00:27:05.345 user 0m0.057s 00:27:05.345 sys 0m0.064s 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.345 ************************************ 00:27:05.345 END TEST nvmf_target_disconnect_tc1 00:27:05.345 ************************************ 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:05.345 ************************************ 00:27:05.345 START TEST nvmf_target_disconnect_tc2 00:27:05.345 ************************************ 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1587295 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1587295 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1587295 ']' 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.345 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.345 [2024-12-09 15:20:06.712852] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:27:05.345 [2024-12-09 15:20:06.712896] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.345 [2024-12-09 15:20:06.792527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.345 [2024-12-09 15:20:06.833305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.345 [2024-12-09 15:20:06.833340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:05.345 [2024-12-09 15:20:06.833347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.345 [2024-12-09 15:20:06.833355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.345 [2024-12-09 15:20:06.833360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.346 [2024-12-09 15:20:06.834962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:05.346 [2024-12-09 15:20:06.834987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:05.346 [2024-12-09 15:20:06.835092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:05.346 [2024-12-09 15:20:06.835094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 Malloc0 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.346 15:20:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 [2024-12-09 15:20:06.997624] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 15:20:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 [2024-12-09 15:20:07.026558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1587322 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:05.346 15:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:07.899 15:20:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1587295 00:27:07.899 15:20:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error 
(sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Read completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Write completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.899 Write completed with error (sct=0, sc=8) 00:27:07.899 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 [2024-12-09 15:20:09.054328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed 
with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 [2024-12-09 15:20:09.054544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Read completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.900 Write completed with error (sct=0, sc=8) 00:27:07.900 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 
Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 [2024-12-09 15:20:09.054753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 
starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Read completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 Write completed with error (sct=0, sc=8) 00:27:07.901 starting I/O failed 00:27:07.901 [2024-12-09 15:20:09.054962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.901 [2024-12-09 15:20:09.055190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.055214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.055356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.055379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.055456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.055467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.055697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.055708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.055800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.055810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.056095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.056126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.056428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.056462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 00:27:07.901 [2024-12-09 15:20:09.056611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.901 [2024-12-09 15:20:09.056642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.901 qpair failed and we were unable to recover it. 
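The connect() failures above trace back to the environment plumbed at the top of this section: nvmf/common.sh moves the target port cvl_0_0 into the cvl_0_0_ns_spdk namespace (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), and port 4420 is opened in iptables. A minimal sketch of that plumbing, using the interface names and addresses from this particular run (other hosts will have different net_devs):

#!/usr/bin/env bash
# Sketch of the namespace/IP setup performed by nvmf/common.sh earlier in this log.
# Interface names (cvl_0_0 / cvl_0_1) and the 10.0.0.x addresses are from this run.
set -e

ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic to the default port on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator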
00:27:07.901 [2024-12-09 15:20:09.056781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.056813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.057966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.057976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 
00:27:07.902 [2024-12-09 15:20:09.058177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.058956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.058966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.059106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.059274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 
00:27:07.902 [2024-12-09 15:20:09.059426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.059516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.059602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.059743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.059847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.059857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.060023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.060035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.902 qpair failed and we were unable to recover it. 00:27:07.902 [2024-12-09 15:20:09.060158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.902 [2024-12-09 15:20:09.060168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.060246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.060255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.060335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.060347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.060450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.060463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
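Before the disconnect, the tc2 setup provisioned the target through a series of rpc_cmd calls (Malloc0 bdev, TCP transport, subsystem cnode1, namespace, listeners on 10.0.0.2:4420). The same sequence can be expressed directly with scripts/rpc.py; this is a sketch only, assuming the nvmf_tgt started above is still up and using the default /var/tmp/spdk.sock RPC socket:

# Sketch of the target provisioning done via rpc_cmd earlier in this log,
# expressed with scripts/rpc.py against the default RPC socket.
RPC=./scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420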
00:27:07.903 [2024-12-09 15:20:09.060606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.060621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.060680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.060690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.060851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.060862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.061799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
00:27:07.903 [2024-12-09 15:20:09.061961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.061971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.062121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.062131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.062260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.062270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.062452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.062483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.062626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.062657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.062893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.062924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.063105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.063119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.063202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.063215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.063303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.063316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.063407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.063419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 
00:27:07.903 [2024-12-09 15:20:09.063619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.063633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.063858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.063890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.064069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.903 [2024-12-09 15:20:09.064101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.903 qpair failed and we were unable to recover it. 00:27:07.903 [2024-12-09 15:20:09.064287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.064321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.064497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.064528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.064666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.064697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.064815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.064847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.065021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.065052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.065232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.065265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.065403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.065435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 
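The flood of "Read/Write completed with error (sct=0, sc=8)" entries and the per-qpair CQ transport errors are the intended outcome of the tc2 sequence shown earlier: the reconnect example is started against the listener, and two seconds later the target process is killed outright, so all outstanding I/O completes with an error status and every qpair drops. Roughly (paths and the pid variable are the ones from this run; the real logic lives in host/target_disconnect.sh):

# Condensed sketch of the tc2 disconnect sequence from host/target_disconnect.sh.
# $nvmfpid is the nvmf_tgt pid recorded by nvmfappstart (1587295 in this run).
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!

sleep 2
kill -9 "$nvmfpid"   # hard-kill the target mid-I/O; outstanding reads/writes fail
sleep 2              # and every qpair reports a transport error, as logged above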
00:27:07.904 [2024-12-09 15:20:09.065577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.065608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.065796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.065828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.066111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.066124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.066284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.066298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.066391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.066403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.066541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.066557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.066733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.066746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.066932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.066946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.067081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.067094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.067233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.067247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 
00:27:07.904 [2024-12-09 15:20:09.067425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.067439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.067606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.067619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.067761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.067775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.067954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.067993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.068195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.068232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.068363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.068394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.068618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.068650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.068855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.068886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.069000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.904 [2024-12-09 15:20:09.069031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.904 qpair failed and we were unable to recover it. 00:27:07.904 [2024-12-09 15:20:09.069265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.069298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
00:27:07.905 [2024-12-09 15:20:09.069502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.069533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.069724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.069755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.069872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.069904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.070026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.070039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.070238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.070252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.070414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.070427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.070562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.070576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.070733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.070747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.070959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.070990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.071108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.071139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
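After the kill, every reconnect attempt dies in posix_sock_create with errno 111 (ECONNREFUSED): nothing is listening on 10.0.0.2:4420 until a target is brought back. A hypothetical probe for checking that from the initiator side; the function name, timeout, and retry count are illustrative and not part of the test scripts:

# Hypothetical helper (not in target_disconnect.sh): poll 10.0.0.2:4420 from the
# initiator side until a restarted target accepts connections again.
wait_for_listener() {
    local ip=$1 port=$2 retries=${3:-30}
    for ((i = 0; i < retries; i++)); do
        # /dev/tcp connect attempt; fails with ECONNREFUSED while the target is down
        if timeout 1 bash -c "exec 3<>/dev/tcp/$ip/$port" 2>/dev/null; then
            echo "listener on $ip:$port is back after ~${i}s"
            return 0
        fi
        sleep 1
    done
    echo "no listener on $ip:$port after ${retries}s" >&2
    return 1
}

wait_for_listener 10.0.0.2 4420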
00:27:07.905 [2024-12-09 15:20:09.071344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.071376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.071582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.071614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.071724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.071756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.071936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.071967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.072189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.072202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.072439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.072453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.072537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.072550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.072720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.072734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.072878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.072892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.073151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.073183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 
00:27:07.905 [2024-12-09 15:20:09.073340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.073374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.073621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.073653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.073851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.073868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.074104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.074122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.074280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.905 [2024-12-09 15:20:09.074300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.905 qpair failed and we were unable to recover it. 00:27:07.905 [2024-12-09 15:20:09.074413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.074434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.074577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.074596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.074706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.074725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.074998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.075016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.075297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.075316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 
00:27:07.906 [2024-12-09 15:20:09.075527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.075545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.075707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.075724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.075829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.075847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.076068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.076086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.076242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.076261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.076443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.076474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.076668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.076699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.076972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.077004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.077126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.077144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.077383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.077402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 
00:27:07.906 [2024-12-09 15:20:09.077578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.077596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.077702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.077720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.077948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.077967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.078168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.078186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.078386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.078406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.906 [2024-12-09 15:20:09.078503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.906 [2024-12-09 15:20:09.078521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.906 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.078683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.078700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.078969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.079001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.079129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.079159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.079414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.079455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 
00:27:07.907 [2024-12-09 15:20:09.079620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.079638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.079744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.079762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.079994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.080012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.080261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.080281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.080366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.080382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.080530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.080548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.080691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.080709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.080919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.080937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.081170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.081201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.081334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.081367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 
00:27:07.907 [2024-12-09 15:20:09.081580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.081611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.081887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.081905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.082058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.082076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.082264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.082298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.082489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.082520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.082712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.082749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.082996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.083014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.083155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.083173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.083275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.083294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.083465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.083497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 
00:27:07.907 [2024-12-09 15:20:09.083684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.083715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.083897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.907 [2024-12-09 15:20:09.083929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.907 qpair failed and we were unable to recover it. 00:27:07.907 [2024-12-09 15:20:09.084151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.084181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.084390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.084423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.084559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.084590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.084864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.084895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.085075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.085108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.085269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.085302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.085490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.085521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.085789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.085822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 
00:27:07.908 [2024-12-09 15:20:09.086105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.086136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.086349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.086381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.086621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.086652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.086853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.086885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.087060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.087091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.087359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.087392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.087582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.087614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.087762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.087794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.087914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.087946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.088193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.088232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 
00:27:07.908 [2024-12-09 15:20:09.088450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.088481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.088684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.088715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.088960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.088993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.089194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.089232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.089433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.089465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.089732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.089764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.089891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.089922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.090187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.908 [2024-12-09 15:20:09.090225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.908 qpair failed and we were unable to recover it. 00:27:07.908 [2024-12-09 15:20:09.090487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.090523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.090668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.090701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 
00:27:07.909 [2024-12-09 15:20:09.091004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.091037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.091278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.091313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.091554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.091586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.091817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.091849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.092108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.092140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.092464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.092504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.092643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.092675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.092977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.093008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.093215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.093256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.093451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.093483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 
00:27:07.909 [2024-12-09 15:20:09.093657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.093688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.093944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.093976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.094151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.094182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.094402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.094435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.094605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.094637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.094823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.094855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.095062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.095093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.095303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.095336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.095475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.095507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.095655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.095687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 
00:27:07.909 [2024-12-09 15:20:09.095948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.095980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.096177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.096209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.096399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.096431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.096670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.909 [2024-12-09 15:20:09.096702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.909 qpair failed and we were unable to recover it. 00:27:07.909 [2024-12-09 15:20:09.096834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.096866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.097103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.097135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.097408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.097441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.097587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.097619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.097751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.097782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.097985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.098018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 
00:27:07.910 [2024-12-09 15:20:09.098200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.098240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.098373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.098405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.098550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.098582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.098778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.098810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.099008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.099039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.099314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.099481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.099513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.099730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.099762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.100000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.100032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.100299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.100333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 
00:27:07.910 [2024-12-09 15:20:09.100603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.100635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.100826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.100857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.100996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.101026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.101320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.101523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.101555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.101731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.101773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.102023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.102055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.102250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.102282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.102524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.102554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 00:27:07.910 [2024-12-09 15:20:09.102687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.102718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.910 qpair failed and we were unable to recover it. 
00:27:07.910 [2024-12-09 15:20:09.102946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.910 [2024-12-09 15:20:09.102977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.103194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.103250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.103457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.103489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.103734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.103766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.103957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.103989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.104126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.104160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.104397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.104429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.104612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.104644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.104827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.104859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.105106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.105139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 
00:27:07.911 [2024-12-09 15:20:09.105268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.105301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.105565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.105598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.105727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.105759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.105969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.106000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.106176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.106207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.106484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.106516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.106699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.106730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.106992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.107024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.107194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.107236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.107384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.107415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 
00:27:07.911 [2024-12-09 15:20:09.107537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.107568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.107758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.107790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.108158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.108211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.108413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.108438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.108613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.108636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.108872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.108895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.911 qpair failed and we were unable to recover it. 00:27:07.911 [2024-12-09 15:20:09.109089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.911 [2024-12-09 15:20:09.109120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.109315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.109350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.109567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.109600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.109746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.109778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-12-09 15:20:09.109976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.110008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.110213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.110244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.110466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.110489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.110733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.110755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.111009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.111198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.111356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.111495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.111625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.111739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-12-09 15:20:09.111958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.111980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.112288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.112323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.112518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.112550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.112686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.112718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.112928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.112961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.113278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.113312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.113460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.113492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.113775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.113816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.113975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.113997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.114249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.114289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 
00:27:07.912 [2024-12-09 15:20:09.114545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.114577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.114708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.114739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.912 [2024-12-09 15:20:09.114955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.912 [2024-12-09 15:20:09.114976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.912 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.115077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.115098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.115383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.115406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.115686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.115718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.115918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.115950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.116095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.116130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.116391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.116414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.116520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.116542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 
00:27:07.913 [2024-12-09 15:20:09.116693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.116716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.116841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.116863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.118106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.118146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.118397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.118432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.118608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.118630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.118760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.118781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.119090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.119122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.119295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.119328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.119523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.119554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.119673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.119705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 
00:27:07.913 [2024-12-09 15:20:09.119840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.119874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.120078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.120111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.120234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.120256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.120479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.120503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.913 qpair failed and we were unable to recover it. 00:27:07.913 [2024-12-09 15:20:09.120690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.913 [2024-12-09 15:20:09.120711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.120887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.120908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.121096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.121124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.121378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.121399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.121521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.121542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.121737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.121760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-12-09 15:20:09.122012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.122034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.122135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.122156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.122420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.122444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.122567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.122590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.122816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.122838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.123027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.123050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.123293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.123316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.123539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.123561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.123721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.123743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.123996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.124018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-12-09 15:20:09.124122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.124142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.124341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.124366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.124487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.124509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.124613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.124635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.124809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.124832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.124996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.125018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.125233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.125257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.125387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.125409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.125630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.125652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.125804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.125825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 
00:27:07.914 [2024-12-09 15:20:09.126003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.914 [2024-12-09 15:20:09.126025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.914 qpair failed and we were unable to recover it. 00:27:07.914 [2024-12-09 15:20:09.126289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.126313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.126480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.126502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.126618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.126644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.126766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.126788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.127020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.127042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.127240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.127263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.127379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.127401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.127527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.127549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.127655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.127677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-12-09 15:20:09.127851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.127891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.128000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.128021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.128225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.128248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.128401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.128424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.128543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.128564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.128673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.128694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.128971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.128993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.129179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.129200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.129375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.129398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.129664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.129686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 
00:27:07.915 [2024-12-09 15:20:09.130010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.130032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.130262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.130285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.130508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.130530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.130696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.130718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.130913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.130935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.131102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.131124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.131282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.131305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.915 qpair failed and we were unable to recover it. 00:27:07.915 [2024-12-09 15:20:09.131478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.915 [2024-12-09 15:20:09.131500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.131676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.131698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.131904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.131925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 
00:27:07.916 [2024-12-09 15:20:09.132048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.132070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.132264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.132287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.132462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.132484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.132689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.132711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.132894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.132916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.133011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.133032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.133203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.133230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.133407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.133429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.133648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.133670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.133933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.133954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 
00:27:07.916 [2024-12-09 15:20:09.134214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.134242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.134426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.134447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.134670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.134692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.134882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.134904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.135146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.135168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.135342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.135365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.135639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.135662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.135933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.135955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.136184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.136206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.136382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.136404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 
00:27:07.916 [2024-12-09 15:20:09.136579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.136601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.136709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.136731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.136975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.136997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.916 [2024-12-09 15:20:09.137151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.916 [2024-12-09 15:20:09.137173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.916 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.137294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.137317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.137445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.137467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.137595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.137617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.137792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.137814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.137991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.138013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.138271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.138303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 
00:27:07.917 [2024-12-09 15:20:09.138524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.138555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.138858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.138881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.139172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.139203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.139491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.139523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.139777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.139810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.140054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.140085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.140205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.140233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.140458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.140480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.140592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.140614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.140813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.140835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 
00:27:07.917 [2024-12-09 15:20:09.141009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.141145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.141345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.141522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.141650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.141775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.141904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.141926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.142046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.142067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.142152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.142172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.142338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.142361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 
00:27:07.917 [2024-12-09 15:20:09.142476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.142497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.917 qpair failed and we were unable to recover it. 00:27:07.917 [2024-12-09 15:20:09.142701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.917 [2024-12-09 15:20:09.142723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.142933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.142955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.143123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.143145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.143390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.143413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.143592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.143614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.143857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.143890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.144084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.144116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.144311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.144343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.144520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.144552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 
00:27:07.918 [2024-12-09 15:20:09.144740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.144772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.145018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.145049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.145271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.145310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.145481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.145503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.145743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.145765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.146028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.146050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.146282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.146307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.146471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.146494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.146657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.146683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.146927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.146967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 
00:27:07.918 [2024-12-09 15:20:09.147161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.147193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.147454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.147487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.147664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.147695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.148019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.148052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.148292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.148316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.148406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.148426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.148526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.148548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.148756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.148778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.148954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.148993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 00:27:07.918 [2024-12-09 15:20:09.149209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.918 [2024-12-09 15:20:09.149249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.918 qpair failed and we were unable to recover it. 
00:27:07.918 [2024-12-09 15:20:09.149427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.149459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.149720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.149753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.149960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.149991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.150266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.150290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.150467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.150489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.150716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.150738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.150931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.150953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.151201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.151228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.151361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.151384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.151492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.151513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 
00:27:07.919 [2024-12-09 15:20:09.151707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.151728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.151938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.151961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.152065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.152087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.152354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.152377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.152506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.152528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.152698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.152721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.152858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.152880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.153148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.153170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.153446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.153468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.153575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.153598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 
00:27:07.919 [2024-12-09 15:20:09.153776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.153798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.154021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.154043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.154202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.154229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.154328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.154350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.154469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.919 [2024-12-09 15:20:09.154491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.919 qpair failed and we were unable to recover it. 00:27:07.919 [2024-12-09 15:20:09.154650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.154672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.154765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.154787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.154895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.154915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.155100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.155132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.155385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.155471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-12-09 15:20:09.155645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.155681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.155994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.156027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.156323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.156358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.156496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.156528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.156707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.156739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.157006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.157038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.157224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.157258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.157407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.157439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.157637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.157669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.157865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.157896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-12-09 15:20:09.158109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.158157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.158383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.158417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.158604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.158644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.158791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.158823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.159158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.159184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.159322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.159345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.159513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.159535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.159777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.159809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.160010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.160041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.160343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.160367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 
00:27:07.920 [2024-12-09 15:20:09.160472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.160493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.160602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.160624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.160736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.160757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.160933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.160955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.920 [2024-12-09 15:20:09.161249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.920 [2024-12-09 15:20:09.161283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.920 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.161504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.161536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.161753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.161786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.162087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.162119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.162264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.162287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.162462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.162485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 
00:27:07.921 [2024-12-09 15:20:09.162605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.162627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.162895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.162917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.163086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.163109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.163223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.163246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.163474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.163497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.163620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.163642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.163813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.163835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.163936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.163957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.164043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.164064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.164185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.164207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 
00:27:07.921 [2024-12-09 15:20:09.164352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.164375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.164554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.164577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.164694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.164716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.164909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.164932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.165170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.165192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.165372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.165395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.165567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.165589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.165693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.165714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.165916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.165938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.166034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.166056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 
00:27:07.921 [2024-12-09 15:20:09.166236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.166260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.166443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.166465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.166582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.166605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.166730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.166753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.166863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.166885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.921 qpair failed and we were unable to recover it. 00:27:07.921 [2024-12-09 15:20:09.166987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.921 [2024-12-09 15:20:09.167009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.167248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.167272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.167372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.167394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.167503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.167525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.167635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.167657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 
00:27:07.922 [2024-12-09 15:20:09.167842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.167864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.167952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.167972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.168096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.168199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.168397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.168585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.168702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.168888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.168991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.169101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 
00:27:07.922 [2024-12-09 15:20:09.169279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.169400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.169522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.169650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.169768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.169906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.169928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.170015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.170133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.170277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.170415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 
00:27:07.922 [2024-12-09 15:20:09.170633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.170818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.170927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.170947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.171126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.171149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.171248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.171274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.171357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.171377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.922 qpair failed and we were unable to recover it. 00:27:07.922 [2024-12-09 15:20:09.171546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.922 [2024-12-09 15:20:09.171568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.171658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.171678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.171788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.171810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.171910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.171932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 
00:27:07.923 [2024-12-09 15:20:09.172101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.172286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.172403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.172536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.172732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.172860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.172974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.172994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.173084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.173106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.173205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.173250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.173350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.173372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 
00:27:07.923 [2024-12-09 15:20:09.173463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.173483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.173662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.173684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.173841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.173863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.174022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.174043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.174131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.174151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.174261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.174283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.174407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.174429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.174619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.174645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.174779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 00:27:07.923 [2024-12-09 15:20:09.175009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.923 [2024-12-09 15:20:09.175031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.923 qpair failed and we were unable to recover it. 
00:27:07.923 [2024-12-09 15:20:09.175121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.923 [2024-12-09 15:20:09.175142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.923 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats through [2024-12-09 15:20:09.180667] ...]
00:27:07.924 [2024-12-09 15:20:09.180917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.925 [2024-12-09 15:20:09.180991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:07.925 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f9284000b90 through [2024-12-09 15:20:09.185420] ...]
00:27:07.925 [2024-12-09 15:20:09.185598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.925 [2024-12-09 15:20:09.185624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.925 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x1f85500 through [2024-12-09 15:20:09.208532] ...]
00:27:07.930 [2024-12-09 15:20:09.208652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.930 [2024-12-09 15:20:09.208673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.930 qpair failed and we were unable to recover it.
00:27:07.930 [2024-12-09 15:20:09.208843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.208865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.209032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.209054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.209241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.209275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.209466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.209499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.209631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.209662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.209773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.209805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.209913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.209945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.210138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.210308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.210454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 
00:27:07.930 [2024-12-09 15:20:09.210565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.210678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.210788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.210971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.210993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.211157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.211178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.211342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.211365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.211520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.211542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.211727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.211749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.211853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.211875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.211963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.211984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 
00:27:07.930 [2024-12-09 15:20:09.212098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.212120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.212304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.212327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.212498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.212520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.212625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.930 [2024-12-09 15:20:09.212647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.930 qpair failed and we were unable to recover it. 00:27:07.930 [2024-12-09 15:20:09.212893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.212915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.213001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.213138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.213364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.213497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.213625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 
00:27:07.931 [2024-12-09 15:20:09.213743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.213860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.213882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.214861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.214883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.215054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.215075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 
00:27:07.931 [2024-12-09 15:20:09.215184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.215206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.215415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.215447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.215571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.215603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.215726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.215758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.215940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.215971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.216176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.216208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.216418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.216452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.216568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.216601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.216697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.216719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.216824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.216845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 
00:27:07.931 [2024-12-09 15:20:09.216995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.217967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.217988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.218137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.218163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 00:27:07.931 [2024-12-09 15:20:09.218272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.218295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.931 qpair failed and we were unable to recover it. 
00:27:07.931 [2024-12-09 15:20:09.218399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.931 [2024-12-09 15:20:09.218420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.218522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.218543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.218719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.218741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.218851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.218872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.219040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.219229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.219337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.219458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.219657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.219772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 
00:27:07.932 [2024-12-09 15:20:09.219878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.219899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.220052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.220074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.220164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.220185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.220361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.220384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.220567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.220588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.220753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.220793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.220998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.221030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.221146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.221178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.221345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.221368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.221538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.221574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 
00:27:07.932 [2024-12-09 15:20:09.221817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.221850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.221978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.222010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.222408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.222438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.222599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.222621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.222888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.222911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.223082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.223108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.223330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.223354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.223525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.223547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.223652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.223674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.223839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.223861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 
00:27:07.932 [2024-12-09 15:20:09.224024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.224046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.224199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.224230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.224401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.224423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.224615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.224646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.224775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.224809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.225010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.225043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.225253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.225287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.225458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.932 [2024-12-09 15:20:09.225480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.932 qpair failed and we were unable to recover it. 00:27:07.932 [2024-12-09 15:20:09.225633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.225654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.225771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.225795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 
00:27:07.933 [2024-12-09 15:20:09.225962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.225983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.226973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.226995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.227185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.227207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.227373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.227395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 
00:27:07.933 [2024-12-09 15:20:09.227505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.227527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.227650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.227672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.227768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.227794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.227883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.227904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.228066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.228088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.228203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.228235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.228409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.228430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.228610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.228632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.228732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.228753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.228846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.228868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 
00:27:07.933 [2024-12-09 15:20:09.229023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.229830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.229999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.230109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.230302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 
00:27:07.933 [2024-12-09 15:20:09.230427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.230547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.230715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.230980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.231136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.231157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.231319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.231342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.933 [2024-12-09 15:20:09.231429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.933 [2024-12-09 15:20:09.231451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.933 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.231624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.231646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.231811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.231834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.231931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.231952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 
00:27:07.934 [2024-12-09 15:20:09.232056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.232179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.232301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.232496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.232697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.232830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.232961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.232983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.233074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.233270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.233397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 
00:27:07.934 [2024-12-09 15:20:09.233533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.233643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.233758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.233883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.233905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 
00:27:07.934 [2024-12-09 15:20:09.234800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.234920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.234941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.235775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.235796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.236018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.236040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 
00:27:07.934 [2024-12-09 15:20:09.236274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.236297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.236395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.236417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.236584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.236606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.236703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.236725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.934 qpair failed and we were unable to recover it. 00:27:07.934 [2024-12-09 15:20:09.236894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.934 [2024-12-09 15:20:09.236915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.237143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.237164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.237268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.237291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.237378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.237400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.237563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.237585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.237802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.237823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 
00:27:07.935 [2024-12-09 15:20:09.237919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.237964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.238936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.238958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 
00:27:07.935 [2024-12-09 15:20:09.239287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.239924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.239946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.240099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.240120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.240341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.240364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.240535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.240557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.240643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.240664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 
00:27:07.935 [2024-12-09 15:20:09.240750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.240771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.240877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.240899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.241000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.241021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.241174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.241195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.241362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.241385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.241485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.241506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.241593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.241615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.935 [2024-12-09 15:20:09.241767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.935 [2024-12-09 15:20:09.241793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.935 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-12-09 15:20:09.242265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.242885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.242981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.243158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.243269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.243479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.243605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-12-09 15:20:09.243721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.243909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.243933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.244883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.244904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.245056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.245098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-12-09 15:20:09.245232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.245265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.245381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.245413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.245543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.245575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.245685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.245718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.245830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.245862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.246046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.246079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.246199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.246271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.246382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.246425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.246599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.246620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.246777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.246799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 
00:27:07.936 [2024-12-09 15:20:09.247032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.247279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.247398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.247505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.247644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.247816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.936 [2024-12-09 15:20:09.247947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.936 [2024-12-09 15:20:09.247969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.936 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.248118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.248139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.248250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.248274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.248356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.248379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-12-09 15:20:09.248528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.248550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.248706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.248728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.248830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.248851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.249884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.249906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-12-09 15:20:09.250004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.250109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.250268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.250443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.250548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.250718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.250880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.250911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.251089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.251120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.251239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.251273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.251382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.251414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-12-09 15:20:09.251613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.251635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.251737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.251759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.251847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.251870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.251985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.252106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.252290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.252502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.252616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.252747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.252877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.252899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 
00:27:07.937 [2024-12-09 15:20:09.253077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.253119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.253244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.253278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.253402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.253433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.253624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.253655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.253844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.253876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.254011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.254042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.254149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.937 [2024-12-09 15:20:09.254180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.937 qpair failed and we were unable to recover it. 00:27:07.937 [2024-12-09 15:20:09.254419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.254442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.254603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.254625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.254793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.254824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 
00:27:07.938 [2024-12-09 15:20:09.254964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.254997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.255172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.255204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.255458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.255490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.255696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.255728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.255934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.255967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.256191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.256234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.256418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.256450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.256620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.256642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.256765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.256786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.256953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.256974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 
00:27:07.938 [2024-12-09 15:20:09.257142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.257174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.257293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.257325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.257508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.257545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.257672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.257704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.257898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.257930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.258041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.258072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.258275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.258310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.258484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.258515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.258622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.258643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 00:27:07.938 [2024-12-09 15:20:09.258735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.938 [2024-12-09 15:20:09.258757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.938 qpair failed and we were unable to recover it. 
00:27:07.943 [2024-12-09 15:20:09.290638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.290659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.290750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.290772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.290956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.290978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.291954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.291980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 
00:27:07.943 [2024-12-09 15:20:09.292060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.292879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.292900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 
00:27:07.943 [2024-12-09 15:20:09.293301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.293959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.293980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.294060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.294082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.294175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.943 [2024-12-09 15:20:09.294196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.943 qpair failed and we were unable to recover it. 00:27:07.943 [2024-12-09 15:20:09.294375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.294398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.294481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.294502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-12-09 15:20:09.294591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.294612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.294717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.294738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.294900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.294922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-12-09 15:20:09.295862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.295883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.295991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.296976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.296997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.297241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.297264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-12-09 15:20:09.297359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.297380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.297544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.297565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.297659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.297680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.297790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.297812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.297965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.297987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.298143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.298252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.298388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.298505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.298625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-12-09 15:20:09.298810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.298911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.298933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.299032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.299053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.299230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.299252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.299427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.299449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.299535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.299556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.299663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.299684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.299832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.299852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.300025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.300047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.300130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.300151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 
00:27:07.944 [2024-12-09 15:20:09.300267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.300290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.300374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.300395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.300485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.944 [2024-12-09 15:20:09.300506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.944 qpair failed and we were unable to recover it. 00:27:07.944 [2024-12-09 15:20:09.300599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.300621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.300710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.300731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.300925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.300946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.301038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.301059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.301215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.301295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.301427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.301463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.301586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.301618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-12-09 15:20:09.301865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.301897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.302005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.302037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.302158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.302190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.302465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.302497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.302616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.302646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.302765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.302796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.302913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.302944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.303074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.303105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.303288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.303322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.303436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.303460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-12-09 15:20:09.303624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.303656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.303773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.303804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.303931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.303962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.304071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.304103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.304331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.304365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.304469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.304500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.304616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.304646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.304781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.304802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.304912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.304934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-12-09 15:20:09.305204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.305812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.305986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.306007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.306234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.306257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.306413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.306435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.306594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.306615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.945 [2024-12-09 15:20:09.306699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.306720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.306887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.306908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.306998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 00:27:07.945 [2024-12-09 15:20:09.307898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.945 [2024-12-09 15:20:09.307919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.945 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-12-09 15:20:09.308011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.308828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.308978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.309000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.309080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.309101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.309197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.309224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-12-09 15:20:09.309375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.309398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.309567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.309593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.309815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.309837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.309999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.310113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.310239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.310347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.310631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.310748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.310874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.310894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-12-09 15:20:09.311044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.311179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.311310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.311444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.311561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.311696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.311871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.311892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.312112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.312303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.312435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-12-09 15:20:09.312543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.312649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.312825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.312938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.312959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.313050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.313163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.313346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.313530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.313637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.313831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 
00:27:07.946 [2024-12-09 15:20:09.313949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.313970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.314071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.314093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.314246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.314268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.314359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.314382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.314479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.314501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.314654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.314676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.946 [2024-12-09 15:20:09.314767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.946 [2024-12-09 15:20:09.314789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.946 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.314883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.314905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.315004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.315026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.315111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.315131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-12-09 15:20:09.315250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.315272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.315476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.315498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.315726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.315748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.315929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.315950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.316114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.316136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.316237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.316260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.316356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.316377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.316538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.316560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.316716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.316738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.316833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.316855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-12-09 15:20:09.317007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.317029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.317259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.317282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.317493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.317515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.317595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.317616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.317713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.317734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.317887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.317913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.318018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.318126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.318239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.318369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-12-09 15:20:09.318615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.318735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.318843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.318865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.319945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.319967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 
00:27:07.947 [2024-12-09 15:20:09.320058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.320253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.320374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.320593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.320708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.320825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.320948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.320969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.321066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.321087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.321260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.321283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.947 qpair failed and we were unable to recover it. 00:27:07.947 [2024-12-09 15:20:09.321367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.947 [2024-12-09 15:20:09.321388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 
00:27:07.948 [2024-12-09 15:20:09.321542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.321564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.321664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.321687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.321790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.321812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.321967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.321989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.322093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.322216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.322403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.322521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.322639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.322818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 
00:27:07.948 [2024-12-09 15:20:09.322927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.322948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.323973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.323994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.324081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 
00:27:07.948 [2024-12-09 15:20:09.324266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.324394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.324498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.324641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.324824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.324943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.324965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.325111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.325134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.325298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.325321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.325482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.325504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.325765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.325787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 
00:27:07.948 [2024-12-09 15:20:09.325953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.325975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.326136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.326158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.326288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.326392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.326414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.326515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.326536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.326750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.326772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.326921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.326943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 
00:27:07.948 [2024-12-09 15:20:09.327452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.327914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.327997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.328019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.328184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.328205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.328304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.328326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.328413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.328434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.328544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.948 [2024-12-09 15:20:09.328566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.948 qpair failed and we were unable to recover it. 00:27:07.948 [2024-12-09 15:20:09.328647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.328669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 
00:27:07.949 [2024-12-09 15:20:09.328787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.328809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.328913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.328934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.329123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.329363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.329474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.329585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.329777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.329891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.329993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.330101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 
00:27:07.949 [2024-12-09 15:20:09.330275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.330388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.330507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.330700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.330804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.330826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.331069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.331252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.331373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.331479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.331656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 
00:27:07.949 [2024-12-09 15:20:09.331762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.331958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.331980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.332959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.332981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.333062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 
00:27:07.949 [2024-12-09 15:20:09.333179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.333304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.333497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.333615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.333792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.333905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.333927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.334032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.334054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.334138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.334159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.334274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.334298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 00:27:07.949 [2024-12-09 15:20:09.334405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.949 [2024-12-09 15:20:09.334426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.949 qpair failed and we were unable to recover it. 
00:27:07.949 [2024-12-09 15:20:09.334522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-12-09 15:20:09.334545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.949 qpair failed and we were unable to recover it.
00:27:07.949 [2024-12-09 15:20:09.335386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93460 is same with the state(6) to be set
00:27:07.949 [2024-12-09 15:20:09.335602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.949 [2024-12-09 15:20:09.335672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:07.950 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats with only timestamps changing, for tqpair=0x1f85500 and tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420, through 00:27:07.954 / 15:20:09.366 ...]
00:27:07.954 [2024-12-09 15:20:09.366947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.366969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.367075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.367100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.367285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.367308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.367460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.367481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.367639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.367661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.367812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.367833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.367939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.367960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.368121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.368142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.368292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.368316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.368407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.368428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-12-09 15:20:09.368591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.368612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.368711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.368732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.368893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.368915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.369921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.369943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 
00:27:07.954 [2024-12-09 15:20:09.370026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.370047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.370195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.954 [2024-12-09 15:20:09.370226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.954 qpair failed and we were unable to recover it. 00:27:07.954 [2024-12-09 15:20:09.370328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.370350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.370517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.370538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.370699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.370720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.370803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.370824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.370914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.370936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.371032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.371148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.371296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-12-09 15:20:09.371441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.371699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.371804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.371923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.371944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-12-09 15:20:09.372864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.372885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.372986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.373207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.373403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.373528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.373645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.373823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.373940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.373961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.374118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.374289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-12-09 15:20:09.374394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.374509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.374679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.374795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.374900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.374921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.375014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.375125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.375302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.375474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.375596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 
00:27:07.955 [2024-12-09 15:20:09.375776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.375964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.375987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.376075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.376096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.376248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.376270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.376368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.376389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.376492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.955 [2024-12-09 15:20:09.376514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.955 qpair failed and we were unable to recover it. 00:27:07.955 [2024-12-09 15:20:09.376608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.376631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.376783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.376806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.376893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.376914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 
00:27:07.956 [2024-12-09 15:20:09.377140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.377961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.377982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.378068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.378252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.378370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 
00:27:07.956 [2024-12-09 15:20:09.378543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.378651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.378758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.378948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.378972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.379062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.379168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.379290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.379397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.379496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.379691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 
00:27:07.956 [2024-12-09 15:20:09.379865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.379888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.380059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.380081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.380251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.380274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.380428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.380450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.380604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.380628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.380784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.380805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.380886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.380913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.381000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.381170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.381347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 
00:27:07.956 [2024-12-09 15:20:09.381456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.381583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.381764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.381891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.381914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.382063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.382174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.382301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.382432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.382619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.956 [2024-12-09 15:20:09.382795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 
00:27:07.956 [2024-12-09 15:20:09.382971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.956 [2024-12-09 15:20:09.382995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.956 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.383926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.383949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.384038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.384154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 
00:27:07.957 [2024-12-09 15:20:09.384343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.384530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.384643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.384747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.384951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.384973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.385138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.385159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.385266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.385290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.385447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.385469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.385552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.385574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 00:27:07.957 [2024-12-09 15:20:09.385679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.957 [2024-12-09 15:20:09.385701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.957 qpair failed and we were unable to recover it. 
00:27:07.957 [2024-12-09 15:20:09.385807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.957 [2024-12-09 15:20:09.385829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.957 qpair failed and we were unable to recover it.
[2024-12-09 15:20:09.385919 - 15:20:09.417293: the same sequence - posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, then "qpair failed and we were unable to recover it." - repeats continuously, almost always for tqpair=0x1f85500 and intermittently for tqpair=0x7f9290000b90, 0x7f9288000b90 and 0x7f9284000b90, always with addr=10.0.0.2, port=4420.]
00:27:07.962 [2024-12-09 15:20:09.417436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.417506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.417643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.417678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.417855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.417888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.418932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.418953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-12-09 15:20:09.419042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.419167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.419390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.419517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.419624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.419738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.419930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.419952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.420076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.420180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.420458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-12-09 15:20:09.420576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.420698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.420803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.420925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.420946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.421110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.421131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.421234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.421257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.421433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.421455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.421573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.421610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.421801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.421832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.421941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.421973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-12-09 15:20:09.422092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.422115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.422269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.422292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.422383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.422405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.422492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.422513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.422676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.422697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.422882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.422904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.423005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.423026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.423178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.423200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.423329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.423352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.423523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.423545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-12-09 15:20:09.423706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.423728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.423984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.424106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.424234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.424424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.424529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.424705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.424836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.424857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.425010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.425032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.425147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.425169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 
00:27:07.962 [2024-12-09 15:20:09.425267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.425290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.425449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.425471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.425552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.425573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.962 [2024-12-09 15:20:09.425673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.962 [2024-12-09 15:20:09.425695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.962 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.425785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.425811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.425981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.426151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.426329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.426502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.426691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-12-09 15:20:09.426800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.426931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.426953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.427049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.427071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.427162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.427184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.427349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.427372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.427534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.427556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.427711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.427732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.427977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.427998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.428101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.428123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.428233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.428256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-12-09 15:20:09.428443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.428465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.428567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.428589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.428688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.428710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.428856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.428878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.428983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.429167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.429285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.429463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.429584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.429759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-12-09 15:20:09.429888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.429909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.430863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.430885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.431035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.431057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.431275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.431298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-12-09 15:20:09.431389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.431410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.431568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.431590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.431760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.431781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.431861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.431884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.431979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-12-09 15:20:09.432745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.432961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.432982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.433147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.433168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.433320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.433342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.433493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.433515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.433678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.433700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.433801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.433823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.433986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.434172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 
00:27:07.963 [2024-12-09 15:20:09.434313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.434499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.434608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.434777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.434896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.434917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.435160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.435183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.435289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.435311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.435412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.435434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.963 qpair failed and we were unable to recover it. 00:27:07.963 [2024-12-09 15:20:09.435515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.963 [2024-12-09 15:20:09.435537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.435645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.435666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.435756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.435777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.435947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.435969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.436137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.436159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.436306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.436330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.436483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.436505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.436606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.436627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.436790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.436811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.436993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.437187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.437313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.437484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.437610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.437719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.437887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.437908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.438095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.438117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.438204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.438250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.438442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.438464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.438627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.438648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.438738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.438759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.438870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.438891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.438980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.439002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.439107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.439129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.439289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.439312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.439533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.439556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.439705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.439726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.439920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.439941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.440097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.440119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.440240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.440263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.440441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.440463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.440615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.440686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.440826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.440863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.440999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.441030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.441209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.441240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.441478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.441500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.441720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.441742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.441914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.441935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.442110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.442131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.442230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.442253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.442419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.442440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.442662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.442684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.442765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.442787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.442942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.442963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.443117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.443139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.443369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.443392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.443660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.443682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.443790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.443811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.443976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.443997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.444243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.444266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.444419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.444440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.444613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.444636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.444806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.444828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.444939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.444960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.445924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.445945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.446094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.446116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 
00:27:07.964 [2024-12-09 15:20:09.446296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.446319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.446422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.446443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.964 qpair failed and we were unable to recover it. 00:27:07.964 [2024-12-09 15:20:09.446685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.964 [2024-12-09 15:20:09.446707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.446868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.446889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.446986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.447008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.447168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.447191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.447363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.447386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.447547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.447569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.447735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.447757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.447870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.447892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 
00:27:07.965 [2024-12-09 15:20:09.448064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.448087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.448305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.448328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.448495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.448518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.448703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.448725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.448900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.448923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.449022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.449043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.449235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.449258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.449421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.449443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.449597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.449619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.449714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.449736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 
00:27:07.965 [2024-12-09 15:20:09.449852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.449873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.450034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.450055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.450235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.450258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.450472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.450498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.450661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.450683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.450766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.450788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.450888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.450910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.451004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.451026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.451183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.451204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.451437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.451460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 
00:27:07.965 [2024-12-09 15:20:09.451551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.451572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.451654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.451674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.451891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.451912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.452008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.452029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.452137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.452158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.452406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.452429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.452583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.452605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.452771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.452794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.453040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.453061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.453233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.453256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 
00:27:07.965 [2024-12-09 15:20:09.453501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.453523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.453706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.453728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.453822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.453843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.453946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.453967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.454065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.454086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.454273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.454295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.454524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.454545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.454721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.454742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.454891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.454912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.455086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.455108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 
00:27:07.965 [2024-12-09 15:20:09.455341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.455367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.455559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.455580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.455735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.455756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.455961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.455983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.456181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.456202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.456306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.456328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.456425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.456447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.456687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.456708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.456895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.456917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.457013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.457034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 
00:27:07.965 [2024-12-09 15:20:09.457252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.965 [2024-12-09 15:20:09.457275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.965 qpair failed and we were unable to recover it. 00:27:07.965 [2024-12-09 15:20:09.457438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.457460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.457575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.457597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.457760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.457782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.458012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.458083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.458273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.458310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.458489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.458521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.458723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.458754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.458926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.458957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.459092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.459123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 
00:27:07.966 [2024-12-09 15:20:09.459287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.459312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.459414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.459435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.459535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.459557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.459714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.459735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.459951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.459973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.460193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.460214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.460333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.460355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.460512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.460533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.460700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.460723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.460976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.460997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 
00:27:07.966 [2024-12-09 15:20:09.461196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.461225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.461341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.461363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.461603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.461624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.461723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.461745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.461908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.461930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.462028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.462048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.462214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.462245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.462355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.462376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.462544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.462566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.462672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.462694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 
00:27:07.966 [2024-12-09 15:20:09.462870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.462891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.463078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.463113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.463310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.463351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.463535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.463569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.463684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.463715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.463912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.463943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.464158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.464189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.464373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.464405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.464667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.464698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.464871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.464904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 
00:27:07.966 [2024-12-09 15:20:09.465068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.465094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.465259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.465282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.465396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.465418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.465636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.465658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.465882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.465908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.466102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.466226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.466349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.466465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.466660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 
00:27:07.966 [2024-12-09 15:20:09.466787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.466915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.466937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.467086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.467108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.467278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.467301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.467518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.467540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.467700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.467721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.467989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.468011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.468172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.468194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.468436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.468470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.468709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.468741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 
00:27:07.966 [2024-12-09 15:20:09.468864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.468895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.469022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.469054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.469157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.469188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.469373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.469405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.469670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.469695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.966 [2024-12-09 15:20:09.469788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.966 [2024-12-09 15:20:09.469809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.966 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.469909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.469930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.470036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.470058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.470275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.470297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.470454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.470476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 
00:27:07.967 [2024-12-09 15:20:09.470649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.470670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.470839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.470861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.471093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.471115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.471228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.471251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.471350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.471372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.471546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.471568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.471736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.471758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.471870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.471892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.472046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.472242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 
00:27:07.967 [2024-12-09 15:20:09.472428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.472548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.472672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.472792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.472914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.472936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.473104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.473138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.473355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.473387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.473488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.473519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.473637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.473668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.473783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.473814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 
00:27:07.967 [2024-12-09 15:20:09.473983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.474015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.474209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.474252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.474424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.474446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.474606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.474628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.474796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.474818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.474902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.474923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.475079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.475101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.475274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.475296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.475528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.475549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.475716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.475739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 
00:27:07.967 [2024-12-09 15:20:09.475954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.475975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.476231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.476255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.476473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.476495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.476645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.476667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.476888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.476910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.477006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.477028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.477202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.477231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.477424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.477446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.477558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.477580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.477687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.477708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 
00:27:07.967 [2024-12-09 15:20:09.477861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.477883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.478049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.478072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.478261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.478296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.478537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.478569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.478761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.478793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.478980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.479012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.479197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.479235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.479434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.479465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.479695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.479720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.479884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.479906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 
00:27:07.967 [2024-12-09 15:20:09.480014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.480036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.480201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.480236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.480432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.480454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.480601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.480623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.480845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.480877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.480983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.481014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.967 qpair failed and we were unable to recover it. 00:27:07.967 [2024-12-09 15:20:09.481236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.967 [2024-12-09 15:20:09.481269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.481459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.481490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.481607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.481638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.481834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.481866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.482068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.482090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.482306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.482329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.482498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.482520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.482715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.482746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.482853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.482885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.483077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.483109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.483349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.483382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.483565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.483596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.483849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.483881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.484063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.484097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.484214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.484254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.484390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.484422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.484593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.484624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.484831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.484863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.485051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.485084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.485272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.485304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.485562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.485593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.485841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.485873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.486082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.486113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.486261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.486294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.486424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.486456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.486699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.486730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.486912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.486953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.487213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.487255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.487499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.487530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.487712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.487744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.487874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.487905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.488066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.488091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.488314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.488337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.488449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.488470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.488690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.488711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.488876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.488897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.489111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.489143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.489269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.489303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.489502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.489533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.489653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.489685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.489955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.489987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.490100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.490131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.490296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.490319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.490418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.490439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.490542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.490740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.490761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.490855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.490877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.491137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.491159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.491334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.491356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.491524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.491545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.491641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.491662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.491821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.491842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.491950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.491972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.492092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.492114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.492351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.492374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.492521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.492543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.492637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.492659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.492851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.492872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.493059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.493081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.493188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.493209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.493379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.493400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.493518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.493539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.493691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.493712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.493878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.493899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 
00:27:07.968 [2024-12-09 15:20:09.494173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.494205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.968 qpair failed and we were unable to recover it. 00:27:07.968 [2024-12-09 15:20:09.494386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.968 [2024-12-09 15:20:09.494418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.494540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.494571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.494710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.494745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.494932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.494964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.495175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.495207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.495329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.495354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.495600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.495622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.495812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.495834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.496004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.496026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 
00:27:07.969 [2024-12-09 15:20:09.496127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.496148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.496334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.496358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.496511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.496533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.496703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.496725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.496939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.496961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.497043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.497064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.497215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.497258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.497362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.497384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.497535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.497556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.497794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.497816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 
00:27:07.969 [2024-12-09 15:20:09.498005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.498026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.498184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.498205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.498379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.498402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.498618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.498640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.498814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.498836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.498999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.499021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.499241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.499263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.499418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.499440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.499598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.499619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.499771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.499793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 
00:27:07.969 [2024-12-09 15:20:09.499907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.499941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.500112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.500143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.500330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.500363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.500481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.500512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.500689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.500720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.500851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.500882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.501143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.501175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.501377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.501410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.501599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.501631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.501748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.501779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 
00:27:07.969 [2024-12-09 15:20:09.501954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.501986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.502237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.502271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.502529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.502554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.502772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.502794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.503042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.503064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.503180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.503201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.503333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.503355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.503517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.503540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.503700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.503738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.503980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.504012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 
00:27:07.969 [2024-12-09 15:20:09.504138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.504169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.504375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.504408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.504550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.504582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.504849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.504881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.505054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.505086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.505279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.505311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.505558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.505591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.505860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.505894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.506028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.506059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.506175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.506207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 
00:27:07.969 [2024-12-09 15:20:09.506328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.506352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.506522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.506562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.506738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.506769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.506960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.506991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.507171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.507192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.507362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.507384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.969 [2024-12-09 15:20:09.507536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.969 [2024-12-09 15:20:09.507576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.969 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.507746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.507777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.507971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.508004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.508190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.508211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 
00:27:07.970 [2024-12-09 15:20:09.508338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.508361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.508532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.508553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.508717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.508738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.508916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.508938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.509129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.509160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.509297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.509330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.509453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.509485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.509664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.509696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.509873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.509904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.510102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.510134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 
00:27:07.970 [2024-12-09 15:20:09.510258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.510282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.510445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.510467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.510619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.510640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.510744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.510765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.510956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.510993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.511193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.511233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.511471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.511502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.511650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.511682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.511867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.511898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.512119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.512150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 
00:27:07.970 [2024-12-09 15:20:09.512333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.512366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.512556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.512587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.512839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.512871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.513151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.513181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.513381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.513414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.513600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.513630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.513743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.513768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.513968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.513989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.514155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.514177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.514386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.514409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 
00:27:07.970 [2024-12-09 15:20:09.514557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.514579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.514770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.514791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.515031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.515053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.515235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.515258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.515353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.515375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.515485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.515506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.515685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.515706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.515895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.515917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.516134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.516155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.516258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.516280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 
00:27:07.970 [2024-12-09 15:20:09.516439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.516461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.516696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.516718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.516961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.516982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.517156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.517177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.517332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.517354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.517531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.517562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.517751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.517781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.518046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.518077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.518261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.518283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.518393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.518414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 
00:27:07.970 [2024-12-09 15:20:09.518638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.518660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.518832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.518854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.519014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.519055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.519172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.519205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.519343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.519375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.519571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.519602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.519722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.970 [2024-12-09 15:20:09.519753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.970 qpair failed and we were unable to recover it. 00:27:07.970 [2024-12-09 15:20:09.519938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.519970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.520163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.520194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.520306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.520339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.520527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.520558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.520797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.520829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.520949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.520981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.521261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.521295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.521486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.521518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.521763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.521803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.521987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.522173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.522316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.522531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.522662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.522842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.522952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.522972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.523127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.523148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.523260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.523283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.523377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.523398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.523504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.523525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.523686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.523708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.523825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.523847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.524065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.524087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.524260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.524283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.524446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.524468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.524702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.524733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.524960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.524992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.525107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.525139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.525399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.525421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.525667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.525689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.525797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.525819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.526060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.526091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.526279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.526312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.526497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.526529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.526745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.526776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.526964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.526995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.527198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.527239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.527430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.527462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.527594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.527631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.527811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.527843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.527984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.528014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.528254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.528287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.528467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.528498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.528681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.528712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.528883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.528915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.529155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.529176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.529400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.529422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.529541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.529562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.529665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.529686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.529860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.529881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.530059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.530096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.530201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.530243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.530371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.530403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.530504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.530535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.530802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.530834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.530961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.530992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.531261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.531295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.531416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.531447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.531660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.531692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.531863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.531894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.532078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.532109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.532302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.532337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.532591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.532621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 
00:27:07.971 [2024-12-09 15:20:09.532823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.532855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.532973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.532997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.971 [2024-12-09 15:20:09.533151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.971 [2024-12-09 15:20:09.533175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.971 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.533269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.533290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.533557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.533579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.533769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.533790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.533954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.533978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.534129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.534151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.534238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.534260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.534419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.534441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 
00:27:07.972 [2024-12-09 15:20:09.534662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.534684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.534766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.534786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.534885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.534907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.535068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.535089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.535309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.535332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.535488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.535509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.535621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.535643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.535819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.535841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.536062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.536084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 00:27:07.972 [2024-12-09 15:20:09.536192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.972 [2024-12-09 15:20:09.536214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.972 qpair failed and we were unable to recover it. 
00:27:07.972 [2024-12-09 15:20:09.536395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.972 [2024-12-09 15:20:09.536417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.972 qpair failed and we were unable to recover it.
00:27:07.972 [2024-12-09 15:20:09.536533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.972 [2024-12-09 15:20:09.536555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.972 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records repeat for every reconnect attempt from 15:20:09.536 through 15:20:09.571 ...]
00:27:07.975 [2024-12-09 15:20:09.571350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.975 [2024-12-09 15:20:09.571371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.975 qpair failed and we were unable to recover it.
00:27:07.975 [2024-12-09 15:20:09.571524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.571545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.571623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.571645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.571739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.571760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.571900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.571970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.572133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.572204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.572353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.572389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.572639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.572663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.572767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.572789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.573007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.573179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 
00:27:07.975 [2024-12-09 15:20:09.573391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.573500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.573638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.573825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.573952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.573974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.574087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.574107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.574190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.574211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.574411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.574434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.574537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.574558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.574725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.574746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 
00:27:07.975 [2024-12-09 15:20:09.574863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.574884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.575035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.575056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.575207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.575237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.975 qpair failed and we were unable to recover it. 00:27:07.975 [2024-12-09 15:20:09.575339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.975 [2024-12-09 15:20:09.575361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.575520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.575541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.575637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.575658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.575758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.575779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.575874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.575895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.575987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.576008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.576169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.576190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 
00:27:07.976 [2024-12-09 15:20:09.576306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.576342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.576542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.576574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.576708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.576739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.577022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.577053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.577241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.577275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.577525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.577556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.577750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.577774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.577874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.577896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.578010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.578031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.578126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.578147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 
00:27:07.976 [2024-12-09 15:20:09.578302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.578326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.578484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.578506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.578672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.578694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.578870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.578891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.578994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.579015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.579108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.579129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.579387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.579410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.579527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.579548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.579638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.579660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.579890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.579911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 
00:27:07.976 [2024-12-09 15:20:09.580011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.580134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.580251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.580358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.580538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.580715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.580903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.580925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.581118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.581152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.581343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.581415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.581625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.581660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 
00:27:07.976 [2024-12-09 15:20:09.581891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.581916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.582020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.582042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.582194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.582215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.582317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.582338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.582598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.582619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.582791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.582812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.582908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.582930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.583076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.583097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.583207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.583251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.583349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.583371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 
00:27:07.976 [2024-12-09 15:20:09.583595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.583616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.583842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.583863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.584018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.584041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.584134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.584156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.584337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.584361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.584531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.584552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.584794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.584817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.584987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.585104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.585229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 
00:27:07.976 [2024-12-09 15:20:09.585437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.585678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.585781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.585906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.585929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.586174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.976 [2024-12-09 15:20:09.586199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.976 qpair failed and we were unable to recover it. 00:27:07.976 [2024-12-09 15:20:09.586307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.586330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.586451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.586473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.586716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.586738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.586900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.586922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.587035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.587057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.587162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.587183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.587408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.587430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.587596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.587617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.587835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.587857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.587961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.587982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.588144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.588319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.588453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.588561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.588679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.588848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.588973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.588994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.589148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.589169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.589283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.589306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.589474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.589496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.589577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.589598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.589812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.589834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.589988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.590010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.590160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.590183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.590352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.590374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.590523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.590545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.590730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.590756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.590921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.590944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.591039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.591061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.591244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.591268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.591438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.591460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.591560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.591581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.591748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.591770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.591937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.591959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.592053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.592075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.592231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.592253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.592409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.592431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.592514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.592536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.592631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.592653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.592899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.592920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.593081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.593103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.593262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.593284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.593552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.593573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.593736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.593758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.593999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.594021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.594130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.594152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.594382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.594405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.594572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.594595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.594755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.594776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.594948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.594970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.595156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.595178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.595306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.595328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.595537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.595558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.595727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.595753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.595908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.595930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.596114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.596137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.596306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.596328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.596571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.596592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.596835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.596856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.597042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.597063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.597178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.597199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.597375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.597398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.597496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.597517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.597739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.597761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.977 [2024-12-09 15:20:09.597863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.597885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 
00:27:07.977 [2024-12-09 15:20:09.598002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.977 [2024-12-09 15:20:09.598024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.977 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.598200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.598229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.598324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.598346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.598510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.598531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.598685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.598707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.598882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.598904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.599152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.599173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.599390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.599413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.599639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.599661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.599823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.599844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 
00:27:07.978 [2024-12-09 15:20:09.599951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.599972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.600122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.600143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.600244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.600267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.600425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.600448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.600662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.600684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.600787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.600809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.601061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.601082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.601185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.601207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.601320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.601342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.601506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.601528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 
00:27:07.978 [2024-12-09 15:20:09.601693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.601715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.601887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.601908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.602145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.602167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.602329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.602352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.602540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.602561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.602741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.602763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.602914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.602936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.603102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.603123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.603311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.603334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.603493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.603515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 
00:27:07.978 [2024-12-09 15:20:09.603679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.603701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.603855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.603876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.603975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.603997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.604162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.604183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.604360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.604382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.604467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.604489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.604579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.604600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.604701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.604722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.604874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.604896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.605011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 
00:27:07.978 [2024-12-09 15:20:09.605124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.605235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.605495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.605621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.605794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.605967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.605989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.606096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.606117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.606293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.606316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.606485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.606506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.606612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.606634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 
00:27:07.978 [2024-12-09 15:20:09.606729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.606750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.606909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.606931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.607035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.607140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.607266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.607495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.607673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.607890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.607982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.608003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.978 [2024-12-09 15:20:09.608085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.608107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 
00:27:07.978 [2024-12-09 15:20:09.608265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.978 [2024-12-09 15:20:09.608288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.978 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.608462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.608484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.608653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.608675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.608797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.608819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.608912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.608933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.609083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.609105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.609349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.609372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.609537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.609558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.609726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.609748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.609911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.609932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 
00:27:07.979 [2024-12-09 15:20:09.610101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.610123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.610286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.610309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.610390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.610410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.610595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.610617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.610720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.610741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.610888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.610910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.611074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.611095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.611213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.611245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.611415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.611436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.611651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.611673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 
00:27:07.979 [2024-12-09 15:20:09.611898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.611920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.612094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.612116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.612277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.612299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.612481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.612506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.612626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.612647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.612891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.612913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.613075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.613097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.613258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.613280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.613445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.613466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.613714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.613736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 
00:27:07.979 [2024-12-09 15:20:09.613848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.613869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.614908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.614929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.615029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.615050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.615201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.615236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 
00:27:07.979 [2024-12-09 15:20:09.615458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.615480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.615675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.615697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.615875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.615896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.616049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.616070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.616236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.616259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.616408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.616430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.616611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.616633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.616783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.616805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.616916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.616937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.617020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.617040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 
00:27:07.979 [2024-12-09 15:20:09.617144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.617166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.617334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.617357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.617525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.617547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.617697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.617718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.617883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.617905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.618000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.618021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.618111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.618133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.618294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.618317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.618489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.618521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.618637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.618669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 
00:27:07.979 [2024-12-09 15:20:09.618844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.618876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.619141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.619172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.619356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.619389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.619603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.619635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.619906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.619938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.979 [2024-12-09 15:20:09.620072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.979 [2024-12-09 15:20:09.620112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.979 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.620270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.620293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.620471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.620492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.620660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.620682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.620782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.620803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 
00:27:07.980 [2024-12-09 15:20:09.620897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.620919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.621137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.621158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.621241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.621261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.621363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.621385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.621547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.621569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.621720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.621741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.621835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.621857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.622050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.622071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.622178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.622199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.622308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.622330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 
00:27:07.980 [2024-12-09 15:20:09.622415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.622436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.622537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.622558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.622806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.622828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.622999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.623021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.623252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.623274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.623421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.623442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.623524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.623544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.623712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.623733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.623835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.623856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.624046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.624068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 
00:27:07.980 [2024-12-09 15:20:09.624229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.624251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.624363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.624389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.624556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.624577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.624771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.624802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.624924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.624955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.625130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.625162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.625363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.625385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.625496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.625517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.625622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.625644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 00:27:07.980 [2024-12-09 15:20:09.625725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.980 [2024-12-09 15:20:09.625745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:07.980 qpair failed and we were unable to recover it. 
00:27:07.981 [2024-12-09 15:20:09.636271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.636306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.636492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.636524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.636659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.636692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.636807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.636840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.637060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.637093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.637293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.637327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.637502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.637534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.637732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.637766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.637883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.637916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:07.981 [2024-12-09 15:20:09.638140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.981 [2024-12-09 15:20:09.638213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:07.981 qpair failed and we were unable to recover it.
00:27:08.296 [2024-12-09 15:20:09.668699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-12-09 15:20:09.668721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-12-09 15:20:09.668869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-12-09 15:20:09.668888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-12-09 15:20:09.668969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-12-09 15:20:09.668989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-12-09 15:20:09.669252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.296 [2024-12-09 15:20:09.669273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.296 qpair failed and we were unable to recover it. 00:27:08.296 [2024-12-09 15:20:09.669461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.669481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.669629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.669649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.669843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.669864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.670047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.670068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.670180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.670201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.670305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.670326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 
00:27:08.297 [2024-12-09 15:20:09.670486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.670507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.670678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.670699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.670869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.670890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.670993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.671013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.671236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.671258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.671362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.671382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.671544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.671566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.671663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.671685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.671785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.671805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.671979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.672000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 
00:27:08.297 [2024-12-09 15:20:09.672084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.672105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.672261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.672288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.672505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.672527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.672625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.672648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.672821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.672842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.673011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.673033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.673198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.673230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.673381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.673401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.673618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.673639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.673731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.673751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 
00:27:08.297 [2024-12-09 15:20:09.673969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.673990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.674140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.674161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.674321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.674343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.674564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.674586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.674699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.674722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.674825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.674848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.674948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.674970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.675057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.675079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.675245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.675268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.675437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.675458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 
00:27:08.297 [2024-12-09 15:20:09.675548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.675571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.675723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.675745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.297 [2024-12-09 15:20:09.675986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.297 [2024-12-09 15:20:09.676009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.297 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.676230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.676253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.676421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.676443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.676686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.676708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.676905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.676927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.677092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.677114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.677213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.677257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.677353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.677376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 
00:27:08.298 [2024-12-09 15:20:09.677472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.677495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.677685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.677708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.677953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.677975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.678144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.678167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.678330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.678354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.678508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.678530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.678732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.678929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.678951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.679117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.679140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.679316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.679339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 
00:27:08.298 [2024-12-09 15:20:09.679498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.679521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.679674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.679697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.679801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.679823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.679909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.679931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.680097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.680119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.680253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.680277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.680365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.680386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.680577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.680600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.680696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.680718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.680963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.680986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 
00:27:08.298 [2024-12-09 15:20:09.681102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.681125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.681344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.681368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.681538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.681561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.681712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.681735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.681851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.681874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.682040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.682063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.682154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.682177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.682361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.682384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.298 [2024-12-09 15:20:09.682475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.298 [2024-12-09 15:20:09.682498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.298 qpair failed and we were unable to recover it. 00:27:08.299 [2024-12-09 15:20:09.682688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-12-09 15:20:09.682713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 
00:27:08.299 [2024-12-09 15:20:09.682829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-12-09 15:20:09.682852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-12-09 15:20:09.683005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-12-09 15:20:09.683028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-12-09 15:20:09.683179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-12-09 15:20:09.683201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.299 qpair failed and we were unable to recover it. 00:27:08.299 [2024-12-09 15:20:09.683320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.299 [2024-12-09 15:20:09.683343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.683580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.683603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.683762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.683785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.684028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.684050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.684201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.684232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.684396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.684418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.684540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.684562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 
00:27:08.313 [2024-12-09 15:20:09.684712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.684737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.684906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.684928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.685095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.685118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.685340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.685364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.685537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.685559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.685800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.685824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.685988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.686127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.686245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.686371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 
00:27:08.313 [2024-12-09 15:20:09.686546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.686682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.313 [2024-12-09 15:20:09.686864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.313 [2024-12-09 15:20:09.686886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.313 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.687044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.687067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.687231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.687255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.687333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.687355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.687526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.687549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.687710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.687733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.687894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.687917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.688018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.688041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 
00:27:08.314 [2024-12-09 15:20:09.688212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.688257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.688406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.688428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.688644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.688667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.688830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.688852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.688950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.688973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.689241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.689265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.689359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.689385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.689496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.689519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.689740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.689763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.689916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.689938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 
00:27:08.314 [2024-12-09 15:20:09.690104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.690126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.690231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.690255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.690352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.690378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.690462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.690484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.690654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.690676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.690761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.690784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.691002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.691025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.691137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.691159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.691259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.691283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 00:27:08.314 [2024-12-09 15:20:09.691439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.314 [2024-12-09 15:20:09.691462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.314 qpair failed and we were unable to recover it. 
00:27:08.314 [2024-12-09 15:20:09.691580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.691603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.691805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.691828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.692919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.692942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.693106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.693128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 
00:27:08.315 [2024-12-09 15:20:09.693286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.693309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.693526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.693548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.693717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.693739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.693838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.693865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.694143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.694166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.694346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.694369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.694479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.694502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.694750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.694772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.695060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.695082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.695244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.695268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 
00:27:08.315 [2024-12-09 15:20:09.695434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.695457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.695538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.695561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.695666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.695689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.695841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.695864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.696011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.696033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.696186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.696209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.696374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.696397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.696562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.696585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.696687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.696709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.696872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.696895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 
00:27:08.315 [2024-12-09 15:20:09.697122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.697144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.697310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.697334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.315 qpair failed and we were unable to recover it. 00:27:08.315 [2024-12-09 15:20:09.697433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.315 [2024-12-09 15:20:09.697456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.697647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.697670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.697891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.697913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.698109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.698132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.698236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.698259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.698370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.698392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.698561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.698583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.698738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.698760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 
00:27:08.316 [2024-12-09 15:20:09.698984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.699955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.699977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.700063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.700086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.700182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.700205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 
00:27:08.316 [2024-12-09 15:20:09.700309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.700331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.700487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.700510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.700682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.700705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.700854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.700876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.701054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.701076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.701168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.701191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.701410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.701434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.701587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.701609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.701761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.701783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.701952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.701974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 
00:27:08.316 [2024-12-09 15:20:09.702142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.702165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.702342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.702366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.702543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.702565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.702672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.702695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.702786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.702809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.702957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.702980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.703203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.703233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.703330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.703353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.703456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.703478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.703645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.703668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 
00:27:08.316 [2024-12-09 15:20:09.703829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.316 [2024-12-09 15:20:09.703852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.316 qpair failed and we were unable to recover it. 00:27:08.316 [2024-12-09 15:20:09.704007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.704030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.704201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.704231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.704347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.704370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.704456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.704476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.704655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.704677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.704904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.704927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.705095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.705117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.705287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.705311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.705462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.705485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 
00:27:08.317 [2024-12-09 15:20:09.705651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.705674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.705780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.705807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.705905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.705926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.706091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.706114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.706292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.706316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.706408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.706431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.706585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.706607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.706711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.706733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.706907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.706930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.707034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.707057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 
00:27:08.317 [2024-12-09 15:20:09.707160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.707182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.707282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.707304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.707548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.707571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.707727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.707749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.707868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.707891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.708044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.708067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.708157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.708181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.708413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.708438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.708591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.708614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.708856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.708879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 
00:27:08.317 [2024-12-09 15:20:09.708982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.709004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.709086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.709108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.709375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.709399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.709566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.709589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.709687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.709711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.709873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.709896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.710059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.710082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.710197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.710228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.710323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.710354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 00:27:08.317 [2024-12-09 15:20:09.710452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.317 [2024-12-09 15:20:09.710475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.317 qpair failed and we were unable to recover it. 
00:27:08.317 [2024-12-09 15:20:09.710568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.710591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.710710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.710733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.710921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.710944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.711881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.711903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 
00:27:08.318 [2024-12-09 15:20:09.712120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.712142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.712307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.712331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.712440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.712463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.712639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.712662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.712763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.712786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.712893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.712916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.713016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.713261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.713393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.713520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 
00:27:08.318 [2024-12-09 15:20:09.713627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.713737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.713911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.713933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.714085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.714108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.714260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.714285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.714382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.714409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.714574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.714597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.714767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.714790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.714954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.714977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.715170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.715193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 
00:27:08.318 [2024-12-09 15:20:09.715355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.715379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.715531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.715553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.715729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.715751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.715931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.715954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.716045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.716068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.716173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.716195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.716508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.716580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.716826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.716897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.717110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.318 [2024-12-09 15:20:09.717147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.318 qpair failed and we were unable to recover it. 00:27:08.318 [2024-12-09 15:20:09.717391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.717427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 
00:27:08.319 [2024-12-09 15:20:09.717695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.717728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.717976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.718190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.718325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.718439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.718636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.718779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.718897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.718919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.719162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.719184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.719354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.719378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 
00:27:08.319 [2024-12-09 15:20:09.719543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.719565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.719804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.719826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.719922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.719948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.720183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.720206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.720401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.720424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.720597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.720620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.720734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.720757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.720950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.720973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.721089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.721112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 00:27:08.319 [2024-12-09 15:20:09.721263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.319 [2024-12-09 15:20:09.721286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.319 qpair failed and we were unable to recover it. 
00:27:08.319 [2024-12-09 15:20:09.721503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.319 [2024-12-09 15:20:09.721526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.319 qpair failed and we were unable to recover it.
[... the same three-message pattern (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 15:20:09.721503 through 15:20:09.761980 (console timestamps 00:27:08.319-00:27:08.325), cycling over tqpair=0x1f85500, 0x7f9284000b90, 0x7f9288000b90 and 0x7f9290000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:27:08.325 [2024-12-09 15:20:09.762119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.762151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.762285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.762320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.762502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.762534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.762652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.762684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.762807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.762839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.762967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.763000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.763256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.763290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.763404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.763437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.763633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.763665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.763780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.763804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 
00:27:08.325 [2024-12-09 15:20:09.763979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.764011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.764268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.764302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.764487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.764519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.764766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.764788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.765026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.765048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.765133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.765155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.765307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.765331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.765507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.765530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.765693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.765715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.765940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.765973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 
00:27:08.325 [2024-12-09 15:20:09.766149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.766182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.766407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.766441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.766733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.766767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.766971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.767003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.767188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.767234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.767366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.767398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.767524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.767562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.767732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.767764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.767942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.767976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.768154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.768186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 
00:27:08.325 [2024-12-09 15:20:09.768406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.768439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.768736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.325 [2024-12-09 15:20:09.768768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.325 qpair failed and we were unable to recover it. 00:27:08.325 [2024-12-09 15:20:09.768940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.768962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.769142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.769175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.769395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.769430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.769540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.769573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.769693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.769716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.769886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.769910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.770082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.770114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.770384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.770419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 
00:27:08.326 [2024-12-09 15:20:09.770663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.770702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.770866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.770890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.771062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.771085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.771202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.771234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.771390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.771412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.771525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.771549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.771671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.771693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.771869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.771892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.772007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.772040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.772155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.772188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 
00:27:08.326 [2024-12-09 15:20:09.772416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.772488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.772713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.772751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.773040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.773073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.773329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.773382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.773563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.773596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.773732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.773764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.773889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.773922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.774045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.774077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.774197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.774239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.774357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.774390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 
00:27:08.326 [2024-12-09 15:20:09.774598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.774630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.774742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.774776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.774950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.774986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.775164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.775198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.775339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.775372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.775482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.775514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.775636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.775680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.775834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.775857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.776019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.776049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.326 [2024-12-09 15:20:09.776227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.776251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 
00:27:08.326 [2024-12-09 15:20:09.776475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.326 [2024-12-09 15:20:09.776507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.326 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.776745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.776779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.776972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.777004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.777196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.777249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.777367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.777400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.777642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.777675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.777789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.777820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.778001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.778033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.778164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.778196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.778413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.778445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 
00:27:08.327 [2024-12-09 15:20:09.778620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.778656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.778835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.778867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.779128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.779161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.779292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.779327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.779547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.779579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.779710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.779743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.779854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.779876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.779964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.779986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.780155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.780177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.780353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.780377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 
00:27:08.327 [2024-12-09 15:20:09.780539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.780578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.780752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.780786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.781055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.781087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.781228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.781262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.781388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.781421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.781592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.781615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.781806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.781829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.782078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.782101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.782326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.782350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.782503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.782529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 
00:27:08.327 [2024-12-09 15:20:09.782702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.782725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.782931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.782954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.783839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.327 [2024-12-09 15:20:09.783862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.327 qpair failed and we were unable to recover it. 00:27:08.327 [2024-12-09 15:20:09.784025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.784048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 
00:27:08.328 [2024-12-09 15:20:09.784215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.784249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.784411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.784434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.784534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.784557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.784714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.784738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.784833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.784855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.785101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.785231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.785357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.785556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.785725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 
00:27:08.328 [2024-12-09 15:20:09.785833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.785968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.785991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.786071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.786093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.786196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.786228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.786454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.786476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.786575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.786598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.786762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.786785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.787030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.787053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.787203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.787235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 00:27:08.328 [2024-12-09 15:20:09.787419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.328 [2024-12-09 15:20:09.787442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.328 qpair failed and we were unable to recover it. 
00:27:08.328 [2024-12-09 15:20:09.787607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.328 [2024-12-09 15:20:09.787630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.328 qpair failed and we were unable to recover it.
00:27:08.328-00:27:08.334 (The three messages above repeat in lockstep for every reconnect attempt logged between 15:20:09.787607 and 15:20:09.823380. Each attempt fails with errno 111 (connection refused) against addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." Most attempts report tqpair=0x1f85500; the attempts between 15:20:09.796298 and 15:20:09.797436 report tqpair=0x7f9284000b90 instead.)
00:27:08.334 [2024-12-09 15:20:09.823501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.823527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.823763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.823786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.823871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.823892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.824045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.824068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.824313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.824337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.824422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.824444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.824687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.824712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.824867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.824890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.824987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.825165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 
00:27:08.334 [2024-12-09 15:20:09.825351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.825475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.825718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.825841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.825959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.825981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.826171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.826193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.826304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.826328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.826556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.826580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.826736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.826758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.826863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.826885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 
00:27:08.334 [2024-12-09 15:20:09.827043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.827067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.827228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.827252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.827414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.827438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.827593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.827616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.827709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.827732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.827969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.334 [2024-12-09 15:20:09.828041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.334 qpair failed and we were unable to recover it. 00:27:08.334 [2024-12-09 15:20:09.828289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.828330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.828526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.828559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.828671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.828704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.828809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.828843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 
00:27:08.335 [2024-12-09 15:20:09.829016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.829048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.829233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.829261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.829381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.829404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.829582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.829606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.829794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.829816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.829920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.829942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.830054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.830078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.830328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.830353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.830524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.830546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.830776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.830799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 
00:27:08.335 [2024-12-09 15:20:09.830901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.830924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.831018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.831040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.831134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.831157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.831324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.831349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.831568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.831593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.831767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.831790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.831950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.831972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.832114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.832137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.832377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.832401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.832548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.832571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 
00:27:08.335 [2024-12-09 15:20:09.832834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.832857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.833098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.833121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.833256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.833284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.833381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.833405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.833563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.833585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.833689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.833713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.833959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.833982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.834095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.834118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.834276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.834299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.834529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.834551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 
00:27:08.335 [2024-12-09 15:20:09.834790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.834813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.834976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.834999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.835106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.835129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.835289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.835312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.835533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.335 [2024-12-09 15:20:09.835556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.335 qpair failed and we were unable to recover it. 00:27:08.335 [2024-12-09 15:20:09.835775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.835797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.835971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.835994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.836108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.836130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.836216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.836248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.836361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.836384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 
00:27:08.336 [2024-12-09 15:20:09.836498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.836521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.836686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.836709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.836876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.836899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.836984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.837005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.837109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.837131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.837282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.837306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.837487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.837510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.837736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.837759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.837867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.837890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.838002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.838028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 
00:27:08.336 [2024-12-09 15:20:09.838125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.838148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.838329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.838401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.838558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.838594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.838743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.838777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.838966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.839000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.839249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.839284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.839524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.839557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.839679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.839704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.839875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.839899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.840047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.840070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 
00:27:08.336 [2024-12-09 15:20:09.840310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.840335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.840430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.840453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.840537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.840558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.840802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.840826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.840926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.840949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.841060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.841082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.841301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.841326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.841480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.841503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.841677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.841700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.841795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.841818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 
00:27:08.336 [2024-12-09 15:20:09.841988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.842010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.842091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.842112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.842296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.842320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.842542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.336 [2024-12-09 15:20:09.842565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.336 qpair failed and we were unable to recover it. 00:27:08.336 [2024-12-09 15:20:09.842736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.842759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.842908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.842931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.843088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.843114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.843209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.843251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.843418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.843441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.843540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.843563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 
00:27:08.337 [2024-12-09 15:20:09.843788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.843811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.843900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.843921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.844034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.844056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.844205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.844237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.844334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.844357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.844606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.844629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.844873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.844896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.845085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.845203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.845322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 
00:27:08.337 [2024-12-09 15:20:09.845513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.845654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.845760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.845936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.845958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.846226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.846249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.846423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.846446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.846536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.846559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.846804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.846827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.847011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.847034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.847139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.847163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 
00:27:08.337 [2024-12-09 15:20:09.847356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.847380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.847480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.847503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.847722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.847744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.847844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.847870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.848041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.848063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.848233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.848257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.848376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.848399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.848552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.848574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.848734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.848757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 00:27:08.337 [2024-12-09 15:20:09.848977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.337 [2024-12-09 15:20:09.848999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.337 qpair failed and we were unable to recover it. 
00:27:08.343 [2024-12-09 15:20:09.886422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.886444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.886663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.886686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.886865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.886897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.887101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.887134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.887374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.887409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.887531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.887564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.887739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.887772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.887953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.887987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.888155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.888189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.888423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.888495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 
00:27:08.343 [2024-12-09 15:20:09.888786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.888833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.889013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.889048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.889319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.889372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.889564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.889596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.889802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.889835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.890076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.890112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.890332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.890366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.890501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.890533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.890719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.890751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.890879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.890912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 
00:27:08.343 [2024-12-09 15:20:09.891047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.891083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.891170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.891191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-12-09 15:20:09.891277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-12-09 15:20:09.891299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.891488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.891511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.891615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.891648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.891841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.891873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.892107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.892140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.892326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.892350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.892434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.892456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.892729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.892771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 
00:27:08.344 [2024-12-09 15:20:09.892898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.892930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.893102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.893134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.893246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.893280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.893455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.893488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.893761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.893795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.894055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.894078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.894249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.894273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.894384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.894407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.894589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.894621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.894814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.894847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 
00:27:08.344 [2024-12-09 15:20:09.895048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.895081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.895259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.895283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.895440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.895473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.895650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.895682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.895872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.895903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.896143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.896165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.896348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.896371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.896468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.896491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.896677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.896700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.896803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.896824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 
00:27:08.344 [2024-12-09 15:20:09.897017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.897050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.897291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.897328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.897510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.897543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.897663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.897696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.897827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.897853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.898079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.898112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.898329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.898364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.898478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.898511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.898622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.344 [2024-12-09 15:20:09.898664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.344 qpair failed and we were unable to recover it. 00:27:08.344 [2024-12-09 15:20:09.898826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.898849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 
00:27:08.345 [2024-12-09 15:20:09.898960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.898993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.899166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.899199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.899394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.899426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.899665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.899697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.899882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.899915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.900093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.900116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.900247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.900272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.900437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.900461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.900697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.900721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.900944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.900967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 
00:27:08.345 [2024-12-09 15:20:09.901065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.901087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.901248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.901272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.901474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.901496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.901615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.901647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.901778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.901812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.901990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.902022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.902230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.902255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.902418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.902451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.902751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.902788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.902930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.902964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 
00:27:08.345 [2024-12-09 15:20:09.903147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.903180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.903368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.903394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.903491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.903512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.903611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.903635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.903805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.903827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.903991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.904015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.904178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.904201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.904376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.904399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.904516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.904548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.904685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.904718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 
00:27:08.345 [2024-12-09 15:20:09.904890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.904924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.905099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.905122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.905298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.905344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.905529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.905562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.905739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.905771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.345 [2024-12-09 15:20:09.905960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.345 [2024-12-09 15:20:09.905993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.345 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.906262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.906297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.906416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.906450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.906556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.906589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.906801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.906834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-12-09 15:20:09.906952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.906985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.907224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.907248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.907336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.907358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.907512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.907535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.907638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.907661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.907779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.907810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.908028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.908051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.908200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.908229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.908342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.908365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.908545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.908572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-12-09 15:20:09.908676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.908700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.908954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.908987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.909254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.909288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.909463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.909502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.909748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.909781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.909884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.909905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.910054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.910076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.910261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.910285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.910451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.910484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.910633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.910666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-12-09 15:20:09.910907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.910948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.911038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.911060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.911209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.911241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.911434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.911457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.911681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.911704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.911800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.911822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.912059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.912082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.912241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.912264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.912480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.912503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.912595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.912616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-12-09 15:20:09.912718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.912741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.912903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.912926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.913079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.913116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.913328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.913363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-12-09 15:20:09.913553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-12-09 15:20:09.913586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-12-09 15:20:09.913763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-12-09 15:20:09.913796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-12-09 15:20:09.913914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-12-09 15:20:09.913946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-12-09 15:20:09.914170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-12-09 15:20:09.914202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-12-09 15:20:09.914457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-12-09 15:20:09.914491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-12-09 15:20:09.914625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-12-09 15:20:09.914657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.352 [2024-12-09 15:20:09.955571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.955594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.955748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.955771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.956007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.956030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.956132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.956155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.956248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.956270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.956442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.956465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.956662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.956695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.956877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.956909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.957021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.957054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.957235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.957264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 
00:27:08.352 [2024-12-09 15:20:09.957426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.957449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.957531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.957552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.957715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.957739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.957958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.957990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.958164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.958197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.958439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.958473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.958575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.958608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.958798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.958831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.959048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.959081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.959197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.959242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 
00:27:08.352 [2024-12-09 15:20:09.959424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.959457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.959648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.959681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.959795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.959828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.960102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.960135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.960254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.960278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.960494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.960517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.960668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.352 [2024-12-09 15:20:09.960691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.352 qpair failed and we were unable to recover it. 00:27:08.352 [2024-12-09 15:20:09.960855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.960877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.961056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.961090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.961197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.961240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-12-09 15:20:09.961433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.961466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.961730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.961764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.961970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.962002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.962185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.962225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.962441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.962465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.962618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.962641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.962746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.962773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.962921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.962943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.963133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.963156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.963242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.963263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-12-09 15:20:09.963427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.963450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.963548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.963570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.963726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.963747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.963856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.963889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.964091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.964124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.964319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.964353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.964543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.964575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.964750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.964784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.964968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.964990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.965100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.965123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-12-09 15:20:09.965319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.965343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.965458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.965481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.965701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.965724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.965959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.965992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.966182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.966216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.966347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.966380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.966501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.966533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.966719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.966752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.966935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.966967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.967149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.967181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-12-09 15:20:09.967379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.967412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.967587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.967620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.967746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.967778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.967952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.967984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.968187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.968230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.968352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.968384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.968574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.968607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.968718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.968751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.968999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.969032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.969149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.969181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-12-09 15:20:09.969408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.969432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-12-09 15:20:09.969529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-12-09 15:20:09.969551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.969718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.969741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.969895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.969917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.970107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.970139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.970253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.970286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.970401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.970434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.970563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.970595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.970782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.970815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.971104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.971127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-12-09 15:20:09.971407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.971441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.971679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.971711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.971895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.971927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.972162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.972195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.972462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.972486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.972701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.972723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.972830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.972853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.973072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.973094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.973203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.973231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.973406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.973439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-12-09 15:20:09.973631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.973664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.973846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.973880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.974067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.974090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.974193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.974233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.974389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.974413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.974655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.974678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.974800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.974822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.974998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.975030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.975243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.975276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.975396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.975428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-12-09 15:20:09.975622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.975656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.975843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.975875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.976118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.976149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.976417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.976451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.976737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.976776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.976948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.976982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.977105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.977137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.977326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.977349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.977542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.977574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.977760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.977793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-12-09 15:20:09.977933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.977964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.978202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.978244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.978418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.978449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.978565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.978596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.978805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.978837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.979016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.979049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.979169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-12-09 15:20:09.979200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-12-09 15:20:09.979363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.979395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.979573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.979606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.979789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.979822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 
00:27:08.355 [2024-12-09 15:20:09.980017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.980039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.980241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.980275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.980408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.980440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.980692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.980725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.980906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.980939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.981077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.981101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.981359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.981391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.981561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.981594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.981850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.981882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-12-09 15:20:09.982140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-12-09 15:20:09.982163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 
00:27:08.355 [2024-12-09 15:20:09.982275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.355 [2024-12-09 15:20:09.982299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.355 qpair failed and we were unable to recover it.
[The same three-line sequence (connect() failed, errno = 111 at posix.c:1054; sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2288; "qpair failed and we were unable to recover it.") repeats continuously from 15:20:09.982 through 15:20:10.020.]
00:27:08.359 [2024-12-09 15:20:10.020838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.359 [2024-12-09 15:20:10.020860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.359 qpair failed and we were unable to recover it.
00:27:08.359 [2024-12-09 15:20:10.020975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-12-09 15:20:10.020998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-12-09 15:20:10.021150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.021173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.021268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.021291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.021454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.021476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.021641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.021663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.021813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.021836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.021931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.021953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.022103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.022125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.022280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.022308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.022482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.022505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-12-09 15:20:10.022674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.022697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.022794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.022818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.022999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.023022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.023200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.023231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.023456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.023479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.023576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.023599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.023767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.023790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.023960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.023983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.024204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.024250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.024374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.024397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-12-09 15:20:10.024500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.024523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.024622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.024644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.024741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.024764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.024852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.024875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.024991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.025014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.025179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.025203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.025388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.025410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.025628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.025650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.025800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.025822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.025922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.025945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-12-09 15:20:10.026043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.026148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.026260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.026378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.026515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.026778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.026893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.026916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.027002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.027125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.027251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-12-09 15:20:10.027435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.027639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.027834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.027973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.027996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.028150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.028173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.028290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.028314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.028476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.028499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.028742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.028765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.028861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.028885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-12-09 15:20:10.029058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-12-09 15:20:10.029081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-12-09 15:20:10.029189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.029213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.029368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.029392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.029558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.029581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.029688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.029711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.029889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.029912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.029996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.030169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.030303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.030478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.030661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-12-09 15:20:10.030784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.030892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.030914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.031105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.031128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.031228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.031252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.031359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.031382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.031512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.031534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.031716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.031739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.031873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.031896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.032007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.032029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.032130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.032153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-12-09 15:20:10.032293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.032318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.032422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.032451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.032598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.032626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.032830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.032861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.032983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.033016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.033170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.033210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.033389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.033425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.033525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.033549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.033654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.033679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.033851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.033878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-12-09 15:20:10.034006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.034034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.034169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.034199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.034316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.034341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.034442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.034465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.034648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.034670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.034775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.034797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.034990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.035013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.035104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.035126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.035293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.035316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.035561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.035583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 
00:27:08.361 [2024-12-09 15:20:10.035687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.035710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.035877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.035899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.036083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.036259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.036375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.036497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.036609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.361 [2024-12-09 15:20:10.036735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.361 qpair failed and we were unable to recover it. 00:27:08.361 [2024-12-09 15:20:10.036908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.036930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.037100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.037123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 
00:27:08.362 [2024-12-09 15:20:10.037277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.037300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.037391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.037415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.037641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.037663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.037905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.037932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.038022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.038147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.038433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.038540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.038675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.038792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 
00:27:08.362 [2024-12-09 15:20:10.038915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.038937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.039036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.039058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.039212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.039253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.039403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.039429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.039594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.039616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.039797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.039820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.039925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.039948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.040042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.040065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.040233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.040257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.040341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.040364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 
00:27:08.362 [2024-12-09 15:20:10.040608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.040630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.040794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.040817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.040919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.040942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 00:27:08.362 [2024-12-09 15:20:10.041923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.362 [2024-12-09 15:20:10.041958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.362 qpair failed and we were unable to recover it. 
00:27:08.362 [2024-12-09 15:20:10.042062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.362 [2024-12-09 15:20:10.042090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.362 qpair failed and we were unable to recover it.
00:27:08.362 [2024-12-09 15:20:10.044828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.362 [2024-12-09 15:20:10.044900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:08.362 qpair failed and we were unable to recover it.
00:27:08.651 [2024-12-09 15:20:10.042062 .. 15:20:10.081945] the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt in this window, alternating between tqpair=0x1f85500 and tqpair=0x7f9288000b90, always against addr=10.0.0.2, port=4420.
00:27:08.651 [2024-12-09 15:20:10.082042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.082064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.082283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.082307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.082458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.082480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.082635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.082657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.082829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.082852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.082946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.082969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.083093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.083116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.083358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.083383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.083476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.083499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.083666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.083689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 
00:27:08.651 [2024-12-09 15:20:10.083853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.083876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.083990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.084013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.084124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.084147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.084304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.084327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.084428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.084451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.084623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.084645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.084877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.084901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.085087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.085110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.085351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.085376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.085532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.085555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 
00:27:08.651 [2024-12-09 15:20:10.085740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.085775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.085907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.085940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.086134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.086167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.086367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.086401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.086592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.086625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.086816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.086849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.086987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.087013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.651 [2024-12-09 15:20:10.087209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.651 [2024-12-09 15:20:10.087242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.651 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.087342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.087364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.087461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.087500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 
00:27:08.652 [2024-12-09 15:20:10.087669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.087692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.087859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.087882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.088095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.088231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.088427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.088564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.088753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.088888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.088994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.089016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.089204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.089235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 
00:27:08.652 [2024-12-09 15:20:10.089395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.089418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.089588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.089610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.089767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.089790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.089892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.089915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.090030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.090141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.090270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.090448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.090574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.090710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 
00:27:08.652 [2024-12-09 15:20:10.090950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.090973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.091154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.091176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.091424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.091447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.091633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.091655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.091818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.091841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.092033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.092056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.092227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.092251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.092400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.092422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.092503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.092524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.092751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.092774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 
00:27:08.652 [2024-12-09 15:20:10.092940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.092963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.093136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.093159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.093311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.093334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.093554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.093577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.093753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.093776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.093870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.093893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.093992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.652 [2024-12-09 15:20:10.094015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.652 qpair failed and we were unable to recover it. 00:27:08.652 [2024-12-09 15:20:10.094109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.094130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.094352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.094376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.094539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.094561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 
00:27:08.653 [2024-12-09 15:20:10.094793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.094816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.095000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.095023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.095171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.095194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.095363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.095386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.095487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.095513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.095683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.095706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.095857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.095880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.096037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.096060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.096226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.096250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.096477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.096499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 
00:27:08.653 [2024-12-09 15:20:10.096756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.096778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.096967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.096989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.097161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.097183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.097296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.097319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.097471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.097494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.097651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.097674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.097834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.097856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.098045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.098067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.098228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.098252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.098420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.098443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 
00:27:08.653 [2024-12-09 15:20:10.098537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.098559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.098727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.098750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.098989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.099013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.099172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.099195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.099509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.099581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.099795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.099831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.100024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.100057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.100175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.100207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.100328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.100360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.100654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.100686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 
00:27:08.653 [2024-12-09 15:20:10.100854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.100879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.653 qpair failed and we were unable to recover it. 00:27:08.653 [2024-12-09 15:20:10.101109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.653 [2024-12-09 15:20:10.101135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.101297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.101320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.101487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.101509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.101608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.101630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.101791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.101815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.102002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.102025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.102132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.102154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.102318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.102341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.102547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.102570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 
00:27:08.654 [2024-12-09 15:20:10.102766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.102788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.102890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.102912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.103160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.103183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.103411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.103433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.103618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.103641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.103837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.103859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.104017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.104040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.104146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.104169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.104330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.104353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.104472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.104495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 
00:27:08.654 [2024-12-09 15:20:10.104601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.104624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.104790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.104812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.104988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.105010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.105109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.105131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.105286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.105308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.105467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.105490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.105639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.105661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.105827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.105849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.106021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.106051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.106202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.106232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 
00:27:08.654 [2024-12-09 15:20:10.106342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.106364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.106538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.106560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.106674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.106697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.106944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.106966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.107066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.107087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.107310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.107333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.107491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.107513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.107681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.107704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.107814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.107837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.107940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.107979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 
00:27:08.654 [2024-12-09 15:20:10.108090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.654 [2024-12-09 15:20:10.108112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.654 qpair failed and we were unable to recover it. 00:27:08.654 [2024-12-09 15:20:10.108207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.108248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.108361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.108385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.108474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.108494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.108643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.108665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.108816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.108839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.109006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.109201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.109331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.109513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 
00:27:08.655 [2024-12-09 15:20:10.109709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.109833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.109965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.109988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.110139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.110162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.110274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.110297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.110394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.110416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.110514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.110537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.110693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.110715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.110881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.110904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 
00:27:08.655 [2024-12-09 15:20:10.111244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.111900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.111986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.112169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.112350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 
00:27:08.655 [2024-12-09 15:20:10.112576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.112692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.112806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.112826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.113045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.113067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.113238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.113261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.113513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.113536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.113643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.113665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.113835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.113857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.114026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.114048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 00:27:08.655 [2024-12-09 15:20:10.114286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.114309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.655 qpair failed and we were unable to recover it. 
00:27:08.655 [2024-12-09 15:20:10.114409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.655 [2024-12-09 15:20:10.114432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.114542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.114564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.114670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.114693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.114878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.114901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.115122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.115144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.115299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.115325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.115476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.115498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.115689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.115710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.115858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.115880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.115970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.115992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 
00:27:08.656 [2024-12-09 15:20:10.116145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.116169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.116346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.116369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.116453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.116477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.116579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.116601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.116754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.116776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.116886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.116909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.116994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.117021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.117266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.117289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.117530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.117553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.117720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.117744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 
00:27:08.656 [2024-12-09 15:20:10.117914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.117936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.118100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.118122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.118239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.118262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.118459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.118481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.118581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.118604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.118762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.118784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.118958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.118981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.119076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.119098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.119252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.119275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.119429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.119452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 
00:27:08.656 [2024-12-09 15:20:10.119560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.119582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.119746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.119769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.119983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.120006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.120107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.120130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.120284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.120307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.120501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.120524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.120716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.120739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.120831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.120854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.121070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.656 [2024-12-09 15:20:10.121093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.656 qpair failed and we were unable to recover it. 00:27:08.656 [2024-12-09 15:20:10.121202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.121233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 
00:27:08.657 [2024-12-09 15:20:10.121402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.121424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.121591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.121614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.121716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.121739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.121898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.121924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.122146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.122169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.122275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.122299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.122466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.122488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.122649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.122671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.122771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.122794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.123049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.123072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 
00:27:08.657 [2024-12-09 15:20:10.123306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.123331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.123591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.123613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.123715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.123738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.123897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.123920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.124081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.124104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.124212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.124242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.124397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.124420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.124574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.124596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.124744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.124768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.124853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.124874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 
00:27:08.657 [2024-12-09 15:20:10.125026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.125049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.125156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.125179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.125267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.125291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.125460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.125483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.125654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.125677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.125831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.125854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.126012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.126036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.126212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.126244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.126402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.126423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.126506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.126530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 
00:27:08.657 [2024-12-09 15:20:10.126684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.126706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.126865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.126887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.127059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.127082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.127161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.127183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.657 [2024-12-09 15:20:10.127318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.657 [2024-12-09 15:20:10.127342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.657 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.127457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.127479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.127627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.127650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.127799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.127822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.127981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.128003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.128093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.128116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 
00:27:08.658 [2024-12-09 15:20:10.128273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.128315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.128416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.128438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.128591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.128613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.128778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.128800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.129060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.129132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.129349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.129388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.129573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.129608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.129740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.129773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.129993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.130027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.130231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.130266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 
00:27:08.658 [2024-12-09 15:20:10.130525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.130550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.130642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.130665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.130822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.130844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.130996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.131140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.131310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.131428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.131550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.131692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.131808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.131830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 
00:27:08.658 [2024-12-09 15:20:10.132004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.132026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.132196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.132226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.132342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.132364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.132474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.132498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.132655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.132677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.132840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.132862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.133027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.133050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.133144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.133165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.133272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.133296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.133543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.133566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 
00:27:08.658 [2024-12-09 15:20:10.133751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.133773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.133970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.134006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.134207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.134250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.658 [2024-12-09 15:20:10.134517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.658 [2024-12-09 15:20:10.134550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.658 qpair failed and we were unable to recover it. 00:27:08.659 [2024-12-09 15:20:10.134742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.659 [2024-12-09 15:20:10.134766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.659 qpair failed and we were unable to recover it. 00:27:08.659 [2024-12-09 15:20:10.134868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.659 [2024-12-09 15:20:10.134891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.659 qpair failed and we were unable to recover it. 00:27:08.659 [2024-12-09 15:20:10.134999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.659 [2024-12-09 15:20:10.135021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.659 qpair failed and we were unable to recover it. 00:27:08.659 [2024-12-09 15:20:10.135188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.659 [2024-12-09 15:20:10.135210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.659 qpair failed and we were unable to recover it. 00:27:08.659 [2024-12-09 15:20:10.135387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.659 [2024-12-09 15:20:10.135410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.659 qpair failed and we were unable to recover it. 00:27:08.659 [2024-12-09 15:20:10.135498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.659 [2024-12-09 15:20:10.135520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.659 qpair failed and we were unable to recover it. 
00:27:08.659 [2024-12-09 15:20:10.135703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.659 [2024-12-09 15:20:10.135725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.659 qpair failed and we were unable to recover it.
00:27:08.659 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for timestamps 15:20:10.135908 through 15:20:10.172553 ...]
00:27:08.664 [2024-12-09 15:20:10.172705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.664 [2024-12-09 15:20:10.172727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.664 qpair failed and we were unable to recover it.
00:27:08.664 [2024-12-09 15:20:10.172880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.172902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.173012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.173034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.173205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.173249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.173490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.173513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.173675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.173698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.173859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.173883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.173990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.174111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.174246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.174366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 
00:27:08.664 [2024-12-09 15:20:10.174480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.174650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.174848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.174966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.174989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.175156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.175178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.175338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.664 [2024-12-09 15:20:10.175360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.664 qpair failed and we were unable to recover it. 00:27:08.664 [2024-12-09 15:20:10.175450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.175472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.175623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.175646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.175839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.175862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.176056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.176079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 
00:27:08.665 [2024-12-09 15:20:10.176235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.176258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.176416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.176438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.176527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.176548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.176651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.176673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.176778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.176801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.177042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.177072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.177250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.177273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.177359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.177382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.177484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.177506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.177676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.177698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 
00:27:08.665 [2024-12-09 15:20:10.177944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.177966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.178128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.178151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.178324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.178347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.178600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.178623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.178731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.178754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.178907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.178928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.179115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.179137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.179303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.179326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.179483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.179506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.179664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.179686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 
00:27:08.665 [2024-12-09 15:20:10.179774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.179797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.180054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.180244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.180371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.180495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.180682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.180878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.180994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.181016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.181184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.181207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.181436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.181458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 
00:27:08.665 [2024-12-09 15:20:10.181560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.181583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.181676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.181698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.181811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.181836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.182007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.182030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.182124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.182146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.182264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.665 [2024-12-09 15:20:10.182287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.665 qpair failed and we were unable to recover it. 00:27:08.665 [2024-12-09 15:20:10.182389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.182410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.182515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.182538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.182639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.182661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.182898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.182921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-12-09 15:20:10.183075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.183098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.183315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.183338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.183428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.183451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.183605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.183627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.183786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.183808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.183976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.183999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.184248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.184272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.184370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.184392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.184558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.184581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.184736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.184758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-12-09 15:20:10.184917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.184940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.185204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.185250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.185486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.185510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.185683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.185706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.185817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.185840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.185952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.185975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.186141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.186163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.186430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.186453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.186549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.186571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.186813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.186839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-12-09 15:20:10.187025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.187142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.187321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.187501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.187622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.187751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.187942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.187965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.188186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.188208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.188412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.188436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.188588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.188611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-12-09 15:20:10.188765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-12-09 15:20:10.188787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-12-09 15:20:10.188974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.188996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.189156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.189178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.189377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.189401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.189634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.189656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.189765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.189787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.189939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.189962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.190127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.190149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.190301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.190325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.190495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.190517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 
00:27:08.667 [2024-12-09 15:20:10.190735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.190758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.190911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.190934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.191050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.191072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.191239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.191262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.191511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.191533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.191704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.191726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.191891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.191913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.192087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.192109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.192355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.192378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.192469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.192489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 
00:27:08.667 [2024-12-09 15:20:10.192673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.192695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.192801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.192823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.192988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.193931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.193953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 
00:27:08.667 [2024-12-09 15:20:10.194115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.194138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.194301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.194324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.194413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.194436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.194598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.194620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.194775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.194797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.194984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.195006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.195180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.195202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.195382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.195405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.195634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.195655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 00:27:08.667 [2024-12-09 15:20:10.195816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.667 [2024-12-09 15:20:10.195838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.667 qpair failed and we were unable to recover it. 
00:27:08.667 [2024-12-09 15:20:10.196059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-12-09 15:20:10.196081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
[... the identical three-record error triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-12-09 15:20:10.196177 through 15:20:10.231883, elapsed 00:27:08.667-00:27:08.673 ...]
00:27:08.673 [2024-12-09 15:20:10.232066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.673 [2024-12-09 15:20:10.232088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.673 qpair failed and we were unable to recover it.
00:27:08.673 [2024-12-09 15:20:10.232295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.232319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.232497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.232520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.232620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.232643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.232860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.232882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.233063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.233086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.233261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.233285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.233405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.233427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.233577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.233600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.233857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.233880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.234030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 
00:27:08.673 [2024-12-09 15:20:10.234149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.234360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.234533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.234665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.234801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.234923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.234944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.235128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.235151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.235370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.235394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.235482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.235503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.235668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.235690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 
00:27:08.673 [2024-12-09 15:20:10.235841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.235864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.235970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.235992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.236094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.236117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.673 qpair failed and we were unable to recover it. 00:27:08.673 [2024-12-09 15:20:10.236278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.673 [2024-12-09 15:20:10.236302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.236477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.236504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.236604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.236625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.236795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.236817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.236898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.236921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.237091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.237113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.237196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.237224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-12-09 15:20:10.237490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.237512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.237596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.237618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.237791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.237814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.237903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.237924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.238027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.238049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.238143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.238166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.238315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.238338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.238502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.238525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.238685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.238707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.238817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.238839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-12-09 15:20:10.239025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.239047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.239216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.239246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.239431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.239454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.239556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.239579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.239739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.239761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.240008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.240031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.240233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.240257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.240369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.240391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.240501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.240524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.240706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.240728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-12-09 15:20:10.240886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.240909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.241102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.241128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.241281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.241305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.241473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.241496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.241599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.241621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.241872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.241894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.241979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.242152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.242273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.242404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-12-09 15:20:10.242519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.242730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.242904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-12-09 15:20:10.242926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-12-09 15:20:10.243041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.243064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.243242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.243265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.243440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.243463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.243554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.243577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.243680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.243702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.243854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.243877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.244099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.244121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-12-09 15:20:10.244204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.244235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.244393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.244415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.244649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.244671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.244825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.244847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.244972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.244995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.245147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.245169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.245324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.245348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.245534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.245556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.245666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.245688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.245795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.245817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-12-09 15:20:10.245915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.245937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.246090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.246112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.246201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.246232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.246314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.246335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.246490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.246512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.246680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.246702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.246857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.246880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.247088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.247110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.247211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.247241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.247440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.247462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-12-09 15:20:10.247682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.247704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.247924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.247946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.248181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.248263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.248477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.248514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.248638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-12-09 15:20:10.248671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-12-09 15:20:10.248877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.248909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.249100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.249133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.249260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.249293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.249390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.249415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.249528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.249550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-12-09 15:20:10.249699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.249721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.249832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.249854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.250043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.250169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.250365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.250472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.250649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.250786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.250986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.251110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-12-09 15:20:10.251260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.251389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.251504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.251618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.251808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.251948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.251971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.252141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.252163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.252383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.252406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.252513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.252536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.252634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.252656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-12-09 15:20:10.252835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.252857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.252944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.252966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.253063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.253085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.253246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.253270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.253364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.253386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.253558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.253581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.253704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.253964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.253986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.254134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.254156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-12-09 15:20:10.254326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-12-09 15:20:10.254349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-12-09 15:20:10.254535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.676 [2024-12-09 15:20:10.254558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.676 qpair failed and we were unable to recover it.
[... the same pair of messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats continuously from 15:20:10.254 through 15:20:10.291 ...]
00:27:08.682 [2024-12-09 15:20:10.291258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.682 [2024-12-09 15:20:10.291330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420
00:27:08.682 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7f9290000b90 (connect() failed, errno = 111; addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) through 15:20:10.292 ...]
00:27:08.682 [2024-12-09 15:20:10.292207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.292250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.292382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.292413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.292657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.292689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.292930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.292962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.293148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.293180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.293471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.293496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.293610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.293632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.293730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.293752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.293920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.293942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.294165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.294197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-12-09 15:20:10.294389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.294422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.294537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.294570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.294740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.294772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.294889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.294922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.295059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.295091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.295249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.295283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.295495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.295528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.295814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.295846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.296123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.296145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.296435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.296458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-12-09 15:20:10.296630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.296653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.296819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.296860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.297119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.297151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.297277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.297314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.297513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.297546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.297673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.297704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-12-09 15:20:10.297837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-12-09 15:20:10.297868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.298060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.298093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.298216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.298267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.298400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.298431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-12-09 15:20:10.298627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.298659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.298793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.298826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.298937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.298969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.299215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.299259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.299435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.299468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.299585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.299619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.299884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.299926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.300054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.300086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.300213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.300257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.300520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.300554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-12-09 15:20:10.300685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.300718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.300955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.300980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.301253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.301286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.301529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.301561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.301773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.301805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.301940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.301973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.302111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.302143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.302322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.302355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.302601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.302632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.302766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.302798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-12-09 15:20:10.302997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.303029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.303353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.303424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.303621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.303658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.303857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.303890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.304134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.304167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.304458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.304492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.304736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.304770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.304943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.304975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.305161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.305193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.305386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.305419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-12-09 15:20:10.305532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.305564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.305748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.305780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.305958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.305990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.306177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.306206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.306323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.306346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-12-09 15:20:10.306591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-12-09 15:20:10.306613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.306748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.306770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.306871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.306893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.307136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.307157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.307288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.307311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 
00:27:08.684 [2024-12-09 15:20:10.307532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.307554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.307650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.307671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.307828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.307850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.307964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.307985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.308138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.308160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.308261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.308283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.308452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.308474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.308580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.308602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.308771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.308794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.308981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.309003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 
00:27:08.684 [2024-12-09 15:20:10.309226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.309249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.309478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.309499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.309690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.309712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.309871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.309894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.310055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.310077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.310189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.310211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.310391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.310414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.310603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.310625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.310791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.310813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.310909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.310931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 
00:27:08.684 [2024-12-09 15:20:10.311145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.311341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.311475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.311602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.311705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.311836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.311969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.311991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.312082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.312103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.312196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.312225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.684 [2024-12-09 15:20:10.312314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.312335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 
00:27:08.684 [2024-12-09 15:20:10.312414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.684 [2024-12-09 15:20:10.312435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.684 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.312591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.312613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.312783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.312805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.312907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.312928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.313103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.313125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.313241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.313265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.313385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.313407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.313599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.313621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.313722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.313742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.314007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.314029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 
00:27:08.685 [2024-12-09 15:20:10.314131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.314152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.314305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.314329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.314459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.314481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.314697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.314719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.314976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.314998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.315244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.315267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.315385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.315407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.315511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.315532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.315690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.315712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.315811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.315832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 
00:27:08.685 [2024-12-09 15:20:10.315988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.316190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.316331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.316475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.316621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.316753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.316897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.316920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.317144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.317166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.317266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.317289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.317438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.317460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 
00:27:08.685 [2024-12-09 15:20:10.317610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.317632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.317730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.317752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.317843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.317866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.318087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.318109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.318273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.318297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.318447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.318470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.318627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.318649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.318749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.318771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.318884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.318906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 00:27:08.685 [2024-12-09 15:20:10.319068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.685 [2024-12-09 15:20:10.319090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.685 qpair failed and we were unable to recover it. 
00:27:08.685 [2024-12-09 15:20:10.319202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.319230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.319389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.319411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.319572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.319595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.319693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.319714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.319803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.319824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.319920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.319942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.320117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.320238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.320421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.320528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.686 [2024-12-09 15:20:10.320734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.320863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.320977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.320998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.321192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.321214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.321375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.321397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.321552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.321575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.321737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.321759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.321863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.321885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.322000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.322026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.322196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.322227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.686 [2024-12-09 15:20:10.322395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.322417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.322609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.322631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.322723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.322745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.322872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.322894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.323059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.323196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.323335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.323581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.323688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.323817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.686 [2024-12-09 15:20:10.323934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.323954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.324116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.324138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.324249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.324273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.324356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.324377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.324537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.324559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.324661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.324683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.324833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.324855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.325041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.325063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.325249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.325272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-12-09 15:20:10.325432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-12-09 15:20:10.325455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-12-09 15:20:10.325618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.325640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.325741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.325763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.325981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.326003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.326095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.326116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.326332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.326354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.326456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.326483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.326712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.326733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.326827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.326850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.327016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.327038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.327190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.327211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-12-09 15:20:10.327390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.327412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.327561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.327582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.327686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.327708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.327811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.327833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.327981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.328003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.328254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.328277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.328544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.328567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.328673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.328696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.328803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.328825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.328995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-12-09 15:20:10.329111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.329291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.329412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.329655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.329762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.329865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.329972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.329992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.330231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.330255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.330350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.330372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.330470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.330493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-12-09 15:20:10.330734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.330756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.330837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.330858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.330964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.330990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.331094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.331116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.331210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.331242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.331334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.331357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.331511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.331533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.331697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.331719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.331890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-12-09 15:20:10.331912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-12-09 15:20:10.332081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-12-09 15:20:10.332204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.332391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.332532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.332667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.332782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.332888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.332911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.333071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.333313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.333426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.333540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-12-09 15:20:10.333643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.333765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.333888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.333910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.334127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.334150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.334233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.334255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.334416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.334439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.334665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.334687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.334797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.334820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.334924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.334947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.335044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.335067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-12-09 15:20:10.335174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.335196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.335349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.335372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.335540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.335563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.335742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.335765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.335875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.335896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.335992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.336014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.336158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.336246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.336322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93460 (9): Bad file descriptor 00:27:08.688 [2024-12-09 15:20:10.336528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.336600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.336749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.336784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-12-09 15:20:10.336906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.336939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.337048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.337081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.337366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.337403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.337571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.337597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.337757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.337779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.338024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.338198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.338227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.338396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.338418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-12-09 15:20:10.338574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-12-09 15:20:10.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.338827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.338849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-12-09 15:20:10.339017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.339039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.339231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.339255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.339427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.339450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.339555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.339577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.339677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.339700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.339920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.339943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.340054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.340251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.340446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.340561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-12-09 15:20:10.340678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.340794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.340963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.340985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.341076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.341099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.341346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.341370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.341539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.341561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.341718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.341740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.341942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.341964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.342136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.342159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.342271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.342296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-12-09 15:20:10.342392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.342415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.342677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.342699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.342936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.342958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.343068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.343090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.343203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.343233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.343475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.343498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.343658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.343679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.343776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.343798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.344016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.344038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.344150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.344172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-12-09 15:20:10.344261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.344284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.344438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.344460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.344547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-12-09 15:20:10.344569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-12-09 15:20:10.344731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.344754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.344847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.344873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.344974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.344996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.345107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.345130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.345211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.345242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.345480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.345502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.345600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.345622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-12-09 15:20:10.345734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.345756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.345839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.345862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.346094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.346117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.346248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.346272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.346489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.346511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.346673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.346695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.346857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.346879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.347067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.347089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.347197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.347227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.347326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.347349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-12-09 15:20:10.347505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.347527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.347689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.347711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.347791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.347813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.348906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.348928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-12-09 15:20:10.349090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.349112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.349278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.349305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.349399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.349421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.349650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.349673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.349866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.349888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.350051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.350074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.350238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.350262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.350417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.350440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.350621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.350643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.350801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.350823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-12-09 15:20:10.350929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.350951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.351145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-12-09 15:20:10.351167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-12-09 15:20:10.351332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.351355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.351461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.351483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.351590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.351612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.351836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.351859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.351960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.351982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.352081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.352104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.352261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.352284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.352393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.352414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 
00:27:08.691 [2024-12-09 15:20:10.352610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.352633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.352755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.352778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.352950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.352972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.353125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.353148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.353422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.353446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.353659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.353682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.353785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.353808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.353908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.353931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.354103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.354135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.354324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.354358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 
00:27:08.691 [2024-12-09 15:20:10.354490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.354522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.354636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.354668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.354906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.354938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.355057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.355088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.355272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.355305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.355408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.355441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.355627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.355658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.355868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.355900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.356070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.356101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.356213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.356256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 
00:27:08.691 [2024-12-09 15:20:10.356460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.356492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.356731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.356763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.356884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.356917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.357041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.357073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.357308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.357342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.357533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.357565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.357698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.357729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.357912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.357945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.358188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.358211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.358378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.358402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 
00:27:08.691 [2024-12-09 15:20:10.358565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.358598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.358698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.358732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.358925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-12-09 15:20:10.358957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 00:27:08.691 [2024-12-09 15:20:10.359154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.359187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.359404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.359427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.359664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.359686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.359923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.359956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.360190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.360232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.360418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.360449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.360710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.360742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 
00:27:08.692 [2024-12-09 15:20:10.360945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.360976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.361147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.361169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.361412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.361445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.361692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.361725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.361899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.361932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.362108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.362130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.362292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.362316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.362528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.362551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.362736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.362759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.362948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.362975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 
00:27:08.692 [2024-12-09 15:20:10.363203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.363231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.363405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.363436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.363705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.363745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.363931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.363963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.364168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.364190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.364384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.364408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.364513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.364536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.364721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.364744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.364851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.364883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.365055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.365087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 
00:27:08.692 [2024-12-09 15:20:10.365209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.365283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.365429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.365461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.365700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.365732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.365852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.365885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.366010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.366043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.366214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.366259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.366498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.366521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.366678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.366700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.366875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.366897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.367076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.367110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 
00:27:08.692 [2024-12-09 15:20:10.367312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.367346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.367612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.367645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.367907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.367939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.368124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.368147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.368297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.368320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.368425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.692 [2024-12-09 15:20:10.368448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.692 qpair failed and we were unable to recover it. 00:27:08.692 [2024-12-09 15:20:10.368557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.368584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.368735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.368757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.368857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.368880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.369129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.369153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 
00:27:08.693 [2024-12-09 15:20:10.369332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.369355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.369557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.369590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.369789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.369822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.369997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.370216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.370361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.370502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.370646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.370771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.370970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.370993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 
00:27:08.693 [2024-12-09 15:20:10.371232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.371256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.371476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.371498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.371748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.371782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.371990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.372022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.372201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.372250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.372491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.372514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.372684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.372707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.372867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.372899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.373116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.373148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.373364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.373399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 
00:27:08.693 [2024-12-09 15:20:10.373614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.373647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.373840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.373873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.374106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.374129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.374290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.374318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.374487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.374510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.374738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.374760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.375007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.375029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.375184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.375207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.375469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.375503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.375743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.375775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 
00:27:08.693 [2024-12-09 15:20:10.375906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.375948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.376125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.376147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.376391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.376415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.376525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.376559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.376757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.693 [2024-12-09 15:20:10.376789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.693 qpair failed and we were unable to recover it. 00:27:08.693 [2024-12-09 15:20:10.376985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.377017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.377142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.377165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.377263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.377285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.377446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.377469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.377648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.377670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-12-09 15:20:10.377883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.377915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.378046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.378080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.378290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.378324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.378518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.378541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.378762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.378794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.379118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.379380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.379403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.379635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.379668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.379866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.379897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.380081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.380114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-12-09 15:20:10.380371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.380395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.380550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.380573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.380833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.380865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.381085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.381118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.381385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.381419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.381635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.381668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.381840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.381873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.382092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.382124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.382390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.382424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.382549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.382582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-12-09 15:20:10.382774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.382806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.383042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.383065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.383238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-12-09 15:20:10.383272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-12-09 15:20:10.383386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.383419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.383670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.383743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.383964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.384002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.384262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.384299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.384504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.384530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.384783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.384806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.384969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.384992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-12-09 15:20:10.385250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.385276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.385382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.385403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.385665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.385697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.385823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.385855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.385975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.386007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.386273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.386296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.386413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.386436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.386673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.386696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.386867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.386890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.387055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.387078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-12-09 15:20:10.387319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.387353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.387546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.387577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.387815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.387848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.388027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.388050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.388236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.388270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.388543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.388575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.388742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.388773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.388955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.388986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.389204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.389247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.389454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.389486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-12-09 15:20:10.389699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.389731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.390016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.390062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.390297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.390322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.390565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.390587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.390756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.390789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.391036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.391069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.391253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.391301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.391474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.391497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.391667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.391700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.391876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.391909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-12-09 15:20:10.392100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.392140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.392367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.392391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.392626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.392649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-12-09 15:20:10.392802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-12-09 15:20:10.392824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.392988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.393011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.393203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.393245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.393426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.393449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.393619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.393642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.393891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.393923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.394099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.394132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 
00:27:08.696 [2024-12-09 15:20:10.394395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.394419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.394590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.394622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.394862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.394894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.395088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.395121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.395386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.395409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.395563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.395585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.395827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.395850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.396043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.396066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.396171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.396195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.396304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.396328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 
00:27:08.696 [2024-12-09 15:20:10.396496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.396517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.396691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.396713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.396909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.396940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.397196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.397236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.397446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.397478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.397757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.397789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.397910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.397942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.398146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.398168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.398388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.398422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.398667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.398700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 
00:27:08.696 [2024-12-09 15:20:10.398883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.398914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.399154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.399185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.399376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.399409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.399650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.399681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.399964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.399996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.400279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.400312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.400491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.400523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.400785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.400818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.400993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.401025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.401292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.401326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 
00:27:08.696 [2024-12-09 15:20:10.401501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.401533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.401791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.401823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.402030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.402062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.402324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.402366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-12-09 15:20:10.402545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-12-09 15:20:10.402568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.402832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.402858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.403031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.403053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.403228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.403252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.403339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.403361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.403610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.403633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 
00:27:08.697 [2024-12-09 15:20:10.403739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.403762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.403921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.403953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.404236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.404271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.404532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.404565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.404765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.404797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.404925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.404957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.405197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.405237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.405420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.405443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.405688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.405719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.405901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.405934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 
00:27:08.697 [2024-12-09 15:20:10.406175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.406208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.406407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.406440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.406681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.406713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.407013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.407045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.407306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.407340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.407583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.407616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.407863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.407896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.408151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.408173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.408355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.408379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.408569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.408603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 
00:27:08.697 [2024-12-09 15:20:10.408807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.408839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.409136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.409168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.409478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.409512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.409763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.409796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.410038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.410071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.410328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.410353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.410526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.410549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.410652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.410674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.410921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.410944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.411104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.411127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 
00:27:08.697 [2024-12-09 15:20:10.411311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.411334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.411516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.697 [2024-12-09 15:20:10.411549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.697 qpair failed and we were unable to recover it. 00:27:08.697 [2024-12-09 15:20:10.411813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.411845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.411953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.411986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.412274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.412308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.412480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.412502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.412724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.412762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.412900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.412932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.413061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.413093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.413355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.413388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 
00:27:08.698 [2024-12-09 15:20:10.413652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.413685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.413875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.413907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.414103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.414136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.414312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.414346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.414495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.414518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.414688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.414711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.414978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.415001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.415180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.415203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.415461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.415495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.415809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.415841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 
00:27:08.698 [2024-12-09 15:20:10.416069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.416102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.416385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.416420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.416530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.416553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.416798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.416821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.417006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.417028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.417271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.417295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.417464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.417487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.417738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.417762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.417934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.417956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-12-09 15:20:10.418175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-12-09 15:20:10.418198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 
00:27:08.977 [2024-12-09 15:20:10.418491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.977 [2024-12-09 15:20:10.418515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.977 qpair failed and we were unable to recover it. 00:27:08.977 [2024-12-09 15:20:10.418672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.977 [2024-12-09 15:20:10.418694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.977 qpair failed and we were unable to recover it. 00:27:08.977 [2024-12-09 15:20:10.418913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.977 [2024-12-09 15:20:10.418937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.977 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.419121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.419148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.419322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.419346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.419591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.419614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.419833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.419857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.420019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.420042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.420231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.420255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.420477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.420500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 
00:27:08.978 [2024-12-09 15:20:10.420662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.420685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.420923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.420946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.421141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.421164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.421436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.421460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.421728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.421751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.421970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.421994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.422175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.422198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.422439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.422462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.422622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.422645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.422886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.422909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 
00:27:08.978 [2024-12-09 15:20:10.422999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.423021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.423176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.423199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.423365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.423389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.423555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.423578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.423803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.423826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.424004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.424027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.424182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.424206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.424419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.424443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.424633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.424656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.424844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.424867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 
00:27:08.978 [2024-12-09 15:20:10.424971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.424998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.425171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.425195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.425462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.425487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.425645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.425677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.425923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.425956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.426138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.426161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.426379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.426403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.426632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.426656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.426836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.978 [2024-12-09 15:20:10.426860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.978 qpair failed and we were unable to recover it. 00:27:08.978 [2024-12-09 15:20:10.427015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.427038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 
00:27:08.979 [2024-12-09 15:20:10.427296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.427320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.427576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.427599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.427793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.427817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.428041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.428064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.428325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.428350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.428589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.428623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.428868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.428900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.429191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.429233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.429492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.429525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.429774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.429848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 
00:27:08.979 [2024-12-09 15:20:10.430139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.430176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.430472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.430508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.430723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.430757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.431015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.431048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.431288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.431323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.431500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.431527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.431684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.431707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.431825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.431852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.432008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.432031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.432306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.432330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 
00:27:08.979 [2024-12-09 15:20:10.432523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.432546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.432727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.432749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.432975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.432998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.433188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.433230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.433480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.433514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.433713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.433746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.433972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.434004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.434141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.434165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.434373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.434397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.434565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.434587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 
00:27:08.979 [2024-12-09 15:20:10.434696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.434719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.434885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.434909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.435073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.435096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.435251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.435275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.435457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.979 [2024-12-09 15:20:10.435490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.979 qpair failed and we were unable to recover it. 00:27:08.979 [2024-12-09 15:20:10.435682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.435715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.435908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.435941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.436163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.436196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.436468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.436501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.436715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.436747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 
00:27:08.980 [2024-12-09 15:20:10.436933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.436966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.437092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.437116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.437315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.437339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.437494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.437518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.437753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.437786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.438036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.438072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.438274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.438299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.438411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.438433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.438599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.438625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.438799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.438823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 
00:27:08.980 [2024-12-09 15:20:10.438934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.438956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.439115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.439138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.439342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.439366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.439615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.439639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.439832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.439864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.440135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.440168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.440510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.440544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.440756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.440788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.440973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.441006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.441280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.441315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 
00:27:08.980 [2024-12-09 15:20:10.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.441543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.441659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.441692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.441985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.442018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.442284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.442318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.442518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.442551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.442736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.442769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.443013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.443045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.443227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.443251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.443372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.443396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.443513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.443552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 
00:27:08.980 [2024-12-09 15:20:10.443810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.443843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.444056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.444090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.444363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.980 [2024-12-09 15:20:10.444404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.980 qpair failed and we were unable to recover it. 00:27:08.980 [2024-12-09 15:20:10.444672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.444714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.444989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.445030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.445143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.445166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.445338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.445363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.445473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.445496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.445670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.445693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.445788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.445810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 
00:27:08.981 [2024-12-09 15:20:10.445982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.446016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.446261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.446296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.446546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.446578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.446779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.446812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.446935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.446968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.447246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.447287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.447483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.447517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.447695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.447729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.447943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.447975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.448237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.448273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 
00:27:08.981 [2024-12-09 15:20:10.448446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.448469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.448575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.448609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.448842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.448875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.449002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.449034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.449213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.449245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.449428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.449451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.449675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.449699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.449866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.449890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.450078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.450101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.450261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.450285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 
00:27:08.981 [2024-12-09 15:20:10.450450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.450474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.450723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.450747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.451011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.451035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.451254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.451280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.451460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.451484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.451713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.451736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.451938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.451971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.452110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.452145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.452333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.452367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.452492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.452516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 
00:27:08.981 [2024-12-09 15:20:10.452765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.452799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.981 [2024-12-09 15:20:10.453069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.981 [2024-12-09 15:20:10.453103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.981 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.453238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.453285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.453389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.453413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.453642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.453676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.453940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.453972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.454184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.454237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.454446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.454479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.454678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.454710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.454931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.454965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 
00:27:08.982 [2024-12-09 15:20:10.455145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.455168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.455278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.455303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.455480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.455504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.455663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.455687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.455895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.455928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.456113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.456147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.456351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.456387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.456524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.456557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.456872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.456907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.457043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.457077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 
00:27:08.982 [2024-12-09 15:20:10.457346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.457380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.457636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.457660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.457789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.457812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.457992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.458016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.458121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.458145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.458303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.458327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.458606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.458640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.458926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.458959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.459151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.459185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.459454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.459478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 
00:27:08.982 [2024-12-09 15:20:10.459649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.459673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.459939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.459972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.460254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.982 [2024-12-09 15:20:10.460288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.982 qpair failed and we were unable to recover it. 00:27:08.982 [2024-12-09 15:20:10.460463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.460486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 00:27:08.983 [2024-12-09 15:20:10.460654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.460678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 00:27:08.983 [2024-12-09 15:20:10.460782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.460805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 00:27:08.983 [2024-12-09 15:20:10.461071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.461095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 00:27:08.983 [2024-12-09 15:20:10.461429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.461463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 00:27:08.983 [2024-12-09 15:20:10.461649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.461681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 00:27:08.983 [2024-12-09 15:20:10.461875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.983 [2024-12-09 15:20:10.461910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.983 qpair failed and we were unable to recover it. 
[the same three-line failure repeats for every subsequent reconnect attempt from 15:20:10.462090 through 15:20:10.509031: connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1f85500, and each attempt ends with "qpair failed and we were unable to recover it."]
00:27:08.988 [2024-12-09 15:20:10.509315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.988 [2024-12-09 15:20:10.509350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.988 qpair failed and we were unable to recover it.
00:27:08.988 [2024-12-09 15:20:10.509560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.509594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.509860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.509885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.510085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.510110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.510298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.510322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.510440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.510464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.510713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.510737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.510882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.510907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.511107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.511131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.511300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.511324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.511619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.511653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 
00:27:08.988 [2024-12-09 15:20:10.511841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.511874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.512132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.512167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.512380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.512415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.512569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.512594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.512805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.512830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.988 [2024-12-09 15:20:10.512932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.988 [2024-12-09 15:20:10.512954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.988 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.513154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.513178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.513440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.513465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.513650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.513675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.513787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.513820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 
00:27:08.989 [2024-12-09 15:20:10.514109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.514143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.514359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.514396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.514593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.514626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.514826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.514851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.515049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.515083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.515277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.515312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.515590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.515625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.515817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.515842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.516078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.516111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.516295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.516329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 
00:27:08.989 [2024-12-09 15:20:10.516585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.516628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.516725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.516751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.516948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.516981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.517264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.517299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.517578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.517612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.517799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.517833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.518034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.518068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.518252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.518277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.518394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.518418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.518585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.518611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 
00:27:08.989 [2024-12-09 15:20:10.518736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.518760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.519006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.519031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.519237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.519261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.519447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.519471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.519641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.519675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.519828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.519861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.520161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.520195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.520427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.520469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.520682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.520716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.521035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.521068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 
00:27:08.989 [2024-12-09 15:20:10.521352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.521387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.521509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.521543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.521683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.521720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.521985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.522010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.989 [2024-12-09 15:20:10.522174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.989 [2024-12-09 15:20:10.522199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.989 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.522413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.522447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.522728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.522762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.522985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.523020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.523140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.523174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.523341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.523376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 
00:27:08.990 [2024-12-09 15:20:10.523509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.523544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.523781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.523816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.524013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.524048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.524319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.524345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.524535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.524571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.524780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.524815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.525096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.525130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.525337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.525374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.525575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.525609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.525759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.525783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 
00:27:08.990 [2024-12-09 15:20:10.525984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.526019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.526160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.526194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.526448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.526484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.526632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.526657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.526829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.526879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.527123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.527158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.527347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.527382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.527640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.527675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.527917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.527942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.528130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.528156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 
00:27:08.990 [2024-12-09 15:20:10.528324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.528350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.528489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.528513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.528627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.528652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.528851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.528887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.529043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.529078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.529341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.529384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.529553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.529577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.529694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.529728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.530012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.530046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.530173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.530208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 
00:27:08.990 [2024-12-09 15:20:10.530429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.530454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.530622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.530647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.530822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.530847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.531032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.990 [2024-12-09 15:20:10.531079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.990 qpair failed and we were unable to recover it. 00:27:08.990 [2024-12-09 15:20:10.531352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.531387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.531637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.531681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.531977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.532002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.532195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.532273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.532419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.532453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.532591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.532627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 
00:27:08.991 [2024-12-09 15:20:10.532776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.532820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.532980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.533009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.533192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.533228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.533350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.533384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.533600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.533634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.533865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.533900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.534175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.534209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.534363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.534398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.534542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.534567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.534775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.534810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 
00:27:08.991 [2024-12-09 15:20:10.535010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.535045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.535248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.535284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.535416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.535450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.535660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.535694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.535828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.535863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.536127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.536209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.536467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.536507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.536770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.536805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.537086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.537121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.537323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.537358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 
00:27:08.991 [2024-12-09 15:20:10.537519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.537553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.537761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.537795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.538052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.538087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.538285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.538322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.538539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.538574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.538778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.538813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.539015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.539049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.539253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.539281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.991 [2024-12-09 15:20:10.539475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.991 [2024-12-09 15:20:10.539515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.991 qpair failed and we were unable to recover it. 00:27:08.992 [2024-12-09 15:20:10.539670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.992 [2024-12-09 15:20:10.539703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.992 qpair failed and we were unable to recover it. 
00:27:08.992 [2024-12-09 15:20:10.539918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.992 [2024-12-09 15:20:10.539953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.992 qpair failed and we were unable to recover it.
00:27:08.992 [2024-12-09 15:20:10.540139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.992 [2024-12-09 15:20:10.540173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.992 qpair failed and we were unable to recover it.
[... repeated near-identical entries elided: connect() failed with errno = 111 (ECONNREFUSED) followed by a sock connection error against addr=10.0.0.2, port=4420 for tqpair=0x1f85500 and tqpair=0x7f9288000b90, each attempt ending "qpair failed and we were unable to recover it." (2024-12-09 15:20:10.540 – 15:20:10.586) ...]
00:27:08.997 [2024-12-09 15:20:10.586946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.997 [2024-12-09 15:20:10.586971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:08.997 qpair failed and we were unable to recover it.
00:27:08.997 [2024-12-09 15:20:10.587162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.587187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.587408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.587432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.587618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.587641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.587823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.587847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.588109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.588133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.588422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.588447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.588618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.588642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.588823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.997 [2024-12-09 15:20:10.588847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.997 qpair failed and we were unable to recover it. 00:27:08.997 [2024-12-09 15:20:10.589044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.589068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.589355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.589380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 
00:27:08.998 [2024-12-09 15:20:10.589544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.589568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.589750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.589774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.589903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.589927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.590031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.590055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.590313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.590339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.590583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.590608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.590733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.590758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.591010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.591035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.591196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.591233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.591355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.591379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 
00:27:08.998 [2024-12-09 15:20:10.591648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.591673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.591894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.591917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.592100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.592125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.592369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.592395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.592571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.592595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.592778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.592803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.593035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.593060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.593252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.593278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.593470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.593495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.593673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.593697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 
00:27:08.998 [2024-12-09 15:20:10.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.594003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.594245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.594271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.594404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.594428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.594689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.594713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.594895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.594919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.595149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.595174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.595374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.595399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.595590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.595615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.595795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.595819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.595982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.596006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 
00:27:08.998 [2024-12-09 15:20:10.596264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.596290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.596523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.596547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.998 [2024-12-09 15:20:10.596808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.998 [2024-12-09 15:20:10.596832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.998 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.596996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.597019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.597256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.597280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.597456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.597485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.597599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.597623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.597881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.597906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.598150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.598174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.598476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.598500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 
00:27:08.999 [2024-12-09 15:20:10.598734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.598759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.599031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.599054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.599234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.599259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.599421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.599445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.599634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.599657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.599859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.599884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.600069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.600093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.600343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.600369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.600570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.600595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.600735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.600759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 
00:27:08.999 [2024-12-09 15:20:10.601010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.601034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.601252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.601276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.601463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.601487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.601742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.601767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.601971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.601995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.602310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.602335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.602548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.602572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.602684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.602708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.602845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.602869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.603006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.603030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 
00:27:08.999 [2024-12-09 15:20:10.603239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.603265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.603495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.603519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.603817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.603842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.604021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.604046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.604332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.604356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.604546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.604572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.604849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.604875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.605119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.605144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.605393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.605418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 00:27:08.999 [2024-12-09 15:20:10.605594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.999 [2024-12-09 15:20:10.605619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:08.999 qpair failed and we were unable to recover it. 
00:27:08.999 [2024-12-09 15:20:10.605751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.605776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.605958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.605983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.606250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.606275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.606403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.606428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.606615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.606641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.606783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.606808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.607068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.607093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.607342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.607367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.607532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.607556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.607678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.607701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 
00:27:09.000 [2024-12-09 15:20:10.607969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.607993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.608313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.608339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.608602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.608626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.608797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.608822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.609024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.609048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.609248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.609273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.609387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.609409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.609672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.609697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.609949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.609974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.610111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.610136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 
00:27:09.000 [2024-12-09 15:20:10.610348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.610373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.610564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.610590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.610792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.610815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.610985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.611010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.611258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.611284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.611524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.611547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.611681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.611706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.611826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.611852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.612021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.612046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.612290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.612316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 
00:27:09.000 [2024-12-09 15:20:10.612434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.612459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.612627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.612651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.612911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.612936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.613134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.613162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.613301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.613327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.613514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.613538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.613716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.613740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.614028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.614051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.614161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.000 [2024-12-09 15:20:10.614186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.000 qpair failed and we were unable to recover it. 00:27:09.000 [2024-12-09 15:20:10.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.614429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 
00:27:09.001 [2024-12-09 15:20:10.614616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.614639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.614825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.614849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.615086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.615111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.615292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.615317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.615551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.615576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.615712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.615736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.616017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.616043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.616231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.616258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.616393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.616417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 00:27:09.001 [2024-12-09 15:20:10.616608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.001 [2024-12-09 15:20:10.616633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.001 qpair failed and we were unable to recover it. 
00:27:09.001 [2024-12-09 15:20:10.616746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.001 [2024-12-09 15:20:10.616768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.001 qpair failed and we were unable to recover it.
00:27:09.001-00:27:09.006 [the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every subsequent reconnect attempt, timestamps 2024-12-09 15:20:10.616941 through 15:20:10.661743]
00:27:09.006 [2024-12-09 15:20:10.661871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.006 [2024-12-09 15:20:10.661894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.006 qpair failed and we were unable to recover it. 00:27:09.006 [2024-12-09 15:20:10.662154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.006 [2024-12-09 15:20:10.662177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.006 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.662394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.662419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.662605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.662629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.662772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.662796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.663036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.663059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.663235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.663260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.663386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.663409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.663666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.663690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.664001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.664025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 
00:27:09.007 [2024-12-09 15:20:10.664211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.664246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.664428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.664452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.664639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.664664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.664941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.664965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.665093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.665117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.665340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.665365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.665527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.665550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.665730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.665757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.665973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.665996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.666162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.666186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 
00:27:09.007 [2024-12-09 15:20:10.666382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.666406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.666629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.666653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.666769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.666789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.667016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.667039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.667213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.667258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.667394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.667418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.667604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.667628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.667872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.667895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.668157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.668186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.668307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.668330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 
00:27:09.007 [2024-12-09 15:20:10.668497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.668521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.668655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.668679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.668797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.668822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.669063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.669087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.669316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.669343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.669536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.669559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.669674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.669698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.669886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.669910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.670012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.670033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.007 [2024-12-09 15:20:10.670293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.670317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 
00:27:09.007 [2024-12-09 15:20:10.670432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.007 [2024-12-09 15:20:10.670456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.007 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.670725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.670748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.670946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.670970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.671166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.671190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.671374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.671403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.671517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.671541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.671711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.671735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.671984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.672009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.672133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.672156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.672319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.672344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 
00:27:09.008 [2024-12-09 15:20:10.672469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.672492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.672726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.672749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.672989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.673013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.673294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.673319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.673500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.673524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.673635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.673660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.673844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.673867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.674154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.674178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.674380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.674406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.674526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.674549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 
00:27:09.008 [2024-12-09 15:20:10.674669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.674693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.674972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.674996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.675169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.675192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.675399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.675425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.675603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.675634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.675804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.675827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.676015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.676039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.676184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.676209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.676453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.676478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.676716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.676740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 
00:27:09.008 [2024-12-09 15:20:10.677001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.677024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.677240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.677265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.677507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.677531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.677674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.677697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.677933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.677958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.678235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.678261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.678440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.678464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.678578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.678601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.678801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.008 [2024-12-09 15:20:10.678825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.008 qpair failed and we were unable to recover it. 00:27:09.008 [2024-12-09 15:20:10.679058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.679081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 
00:27:09.009 [2024-12-09 15:20:10.679306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.679332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.679521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.679545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.679667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.679689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.679823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.679847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.680067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.680091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.680242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.680268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.680452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.680476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.680591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.680615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.680805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.680828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.681036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.681060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 
00:27:09.009 [2024-12-09 15:20:10.681265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.681290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.681451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.681477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.681613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.681637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.681823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.681848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.681956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.681978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.682267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.682291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.682403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.682427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.682593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.682617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.682817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.682841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.683056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.683081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 
00:27:09.009 [2024-12-09 15:20:10.683370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.683396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.683529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.683554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.683738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.683763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.684117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.684141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.684240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.684263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.684455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.684478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.684647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.684671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.684862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.684886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.685082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.685107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.685368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.685392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 
00:27:09.009 [2024-12-09 15:20:10.685511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.685535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.685743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.685768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.686002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.686030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.686194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.686226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.686390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.686414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.686578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.686601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.686863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.686887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.009 [2024-12-09 15:20:10.687138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.009 [2024-12-09 15:20:10.687161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.009 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.687429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.687454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.687638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.687663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 
00:27:09.010 [2024-12-09 15:20:10.687900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.687925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.688045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.688069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.688174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.688198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.688398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.688422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.688559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.688582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.688693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.688717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.688902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.688926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.689090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.689114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.689354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.689391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 00:27:09.010 [2024-12-09 15:20:10.689533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.010 [2024-12-09 15:20:10.689566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.010 qpair failed and we were unable to recover it. 
00:27:09.010 [2024-12-09 15:20:10.689691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.010 [2024-12-09 15:20:10.689726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.010 qpair failed and we were unable to recover it.
[the same three-line error block recurs for every subsequent connection attempt in this span, with only the timestamps advancing through 2024-12-09 15:20:10.739: connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:27:09.016 [2024-12-09 15:20:10.739543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.739576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.739798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.739832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.740080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.740114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.740311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.740336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.740591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.740616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.740747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.740772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.740884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.740919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.741179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.741213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.741436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.741469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.741700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.741735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-12-09 15:20:10.741864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.741888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.742074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.742099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.742297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.742322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.742428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.742469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.742690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.742723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.742945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.742981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.743272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.743302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.743470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.743494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.743677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.743711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.743948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.743982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-12-09 15:20:10.744290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.744325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.744535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.744569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.744682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.744716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.744995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.745029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.745248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.745274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.745452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.745476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.745731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.745756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.745972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.745997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-12-09 15:20:10.746119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-12-09 15:20:10.746144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.746335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.746360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 
00:27:09.017 [2024-12-09 15:20:10.746482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.746505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.746670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.746694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.746889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.746922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.747117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.747149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.747360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.747397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.747603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.747636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.747855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.747888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.748182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.748216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.748405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.748429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.748620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.748653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 
00:27:09.017 [2024-12-09 15:20:10.748855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.748888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.749035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.749204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.749260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.749460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.749494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.749719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.749754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.749970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.750004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.750208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.750243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.750419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.750444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.750610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.750634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.750859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.750883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 
00:27:09.017 [2024-12-09 15:20:10.751005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.751029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.751163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.751188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.751397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.751432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.751635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.751669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.751891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.751925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.017 [2024-12-09 15:20:10.752182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.017 [2024-12-09 15:20:10.752232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.017 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.752377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.752404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.752644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.752668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.752863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.752887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.753122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.753147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 
00:27:09.291 [2024-12-09 15:20:10.753249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.753273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.753444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.753468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.753661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.753685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.753784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.753808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.753917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.753940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.754138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.754162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.754339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.754364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.754537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.754560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.754745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.754770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.754992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.755017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 
00:27:09.291 [2024-12-09 15:20:10.755230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.755256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.755441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.755466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.755678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.755703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.755892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.755916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.756188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.756214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.756402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.756426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.756553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.756578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.756771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.756795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.756977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.757000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.757130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.757154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 
00:27:09.291 [2024-12-09 15:20:10.757353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.757389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.757588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.757622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.757806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.757831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.758056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.758089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.758244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.758285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.758488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.758521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.758719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.758752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.758981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.759006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.759191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.759234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.759442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.759475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 
00:27:09.291 [2024-12-09 15:20:10.759700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.759734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.759880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.759913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.760193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.760238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.760509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.760542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.291 [2024-12-09 15:20:10.760682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.291 [2024-12-09 15:20:10.760715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.291 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.760993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.761027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.761315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.761341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.761621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.761644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.761930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.761954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.762192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.762215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 
00:27:09.292 [2024-12-09 15:20:10.762409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.762433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.762545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.762569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.762758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.762792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.763068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.763101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.763315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.763340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.763589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.763623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.763880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.763914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.764042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.764076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.764306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.764331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.764520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.764544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 
00:27:09.292 [2024-12-09 15:20:10.764662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.764686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.764978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.765017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.765302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.765337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.765484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.765517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.765710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.765743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.766026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.766058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.766339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.766374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.766508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.766542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.766758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.766791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.766997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.767031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 
00:27:09.292 [2024-12-09 15:20:10.767215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.767247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.767485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.767508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.767670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.767694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.767957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.767981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.768235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.768260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.768442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.768467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.768593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.768616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.768724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.768749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.768987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.769011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.769266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.769292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 
00:27:09.292 [2024-12-09 15:20:10.769491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.769516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.769700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.769725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.769849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.292 [2024-12-09 15:20:10.769875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.292 qpair failed and we were unable to recover it. 00:27:09.292 [2024-12-09 15:20:10.769997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.770021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.770349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.770373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.770635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.770661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.770897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.770922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.771137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.771162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.771331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.771361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.771489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.771513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 
00:27:09.293 [2024-12-09 15:20:10.771645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.771668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.771869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.771892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.772083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.772107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.772323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.772348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.772536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.772560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.772741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.772765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.773025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.773050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.773307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.773332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.773447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.773471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.773657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.773682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 
00:27:09.293 [2024-12-09 15:20:10.773956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.773980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.774094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.774116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.774344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.774368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.774503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.774527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.774705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.774728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.774844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.774867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.775038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.775063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.775338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.775362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.775498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.775521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.775654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.775679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 
00:27:09.293 [2024-12-09 15:20:10.775879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.775903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.776073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.776097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.776326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.776351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.776457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.776480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.776670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.776693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.776868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.776891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.777090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.777123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.777270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.777304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.777508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.777542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.777727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.777760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 
00:27:09.293 [2024-12-09 15:20:10.777983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.778016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.293 qpair failed and we were unable to recover it. 00:27:09.293 [2024-12-09 15:20:10.778298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.293 [2024-12-09 15:20:10.778332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.778554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.778589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.778734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.778768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.778911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.778944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.779201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.779248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.779437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.779470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.779741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.779775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.779910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.779943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.780224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.780252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 
00:27:09.294 [2024-12-09 15:20:10.780433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.780456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.780667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.780691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.781025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.781049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.781253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.781279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.781465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.781489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.781713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.781736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.782054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.782088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.782373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.782407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.782535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.782568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.782695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.782729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 
00:27:09.294 [2024-12-09 15:20:10.783033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.783058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.783175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.783201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.783406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.783431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.783567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.783591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.783704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.783729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.783975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.784008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.784155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.784189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.784441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.784476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.784685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.784718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.784946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.784980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 
00:27:09.294 [2024-12-09 15:20:10.785263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.785298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.785438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.785471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.785671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.785705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.785859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.785884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.785993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.786017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.786136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.786160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.786397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.786426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.786591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.786616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.786862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.786887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 00:27:09.294 [2024-12-09 15:20:10.787054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.294 [2024-12-09 15:20:10.787079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.294 qpair failed and we were unable to recover it. 
00:27:09.294 [2024-12-09 15:20:10.787356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.787391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.787600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.787634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.787791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.787825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.788047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.788080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.788232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.788270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.788470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.788494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.788685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.788718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.788925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.788960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.789269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.789305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.789465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.789499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 
00:27:09.295 [2024-12-09 15:20:10.789709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.789733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.789988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.790013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.790185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.790209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.790363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.790387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.790562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.790586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.790702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.790727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.790953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.790978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.791161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.791185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.791380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.791405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.791589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.791613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 
00:27:09.295 [2024-12-09 15:20:10.791804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.791827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.792004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.792029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.792230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.792264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.792395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.792434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.792639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.792673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.792960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.792993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.793177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.793212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.793373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.793406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.793602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.793635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.793856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.793890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 
00:27:09.295 [2024-12-09 15:20:10.794075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.794100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.794284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.794309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.295 [2024-12-09 15:20:10.794502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.295 [2024-12-09 15:20:10.794536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.295 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.794738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.794772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.795043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.795078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.795368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.795403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.795608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.795642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.795908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.795942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.796242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.796268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.796377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.796401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 
00:27:09.296 [2024-12-09 15:20:10.796584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.796609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.796792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.796816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.797048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.797073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.797256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.797281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.797405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.797430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.797535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.797557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.797668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.797692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.797806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.797830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.798003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.798027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.798154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.798188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 
00:27:09.296 [2024-12-09 15:20:10.798454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.798536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.798775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.798856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.799036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.799075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.799384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.799424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.799598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.799633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.799833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.799867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.800153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.800188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.800521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.800564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.800778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.800813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.801063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.801098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 
00:27:09.296 [2024-12-09 15:20:10.801247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.801284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.801430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.801464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.801604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.801638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.801850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.801885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.802194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.802239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.802398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.802433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.802712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.802746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.802937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.802972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.803185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.803231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.296 [2024-12-09 15:20:10.803437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.803472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 
00:27:09.296 [2024-12-09 15:20:10.805050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.296 [2024-12-09 15:20:10.805110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.296 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.805382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.805420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.805630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.805666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.805834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.805870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.806088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.806122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.806319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.806361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.806571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.806606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.806766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.806809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.807006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.807040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.807257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.807292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 
00:27:09.297 [2024-12-09 15:20:10.807434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.807469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.809527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.809595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.809950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.809987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.810271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.810309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.810621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.810656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.810803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.810837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.811035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.811071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.811292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.811330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.811562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.811598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.811911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.811948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 
00:27:09.297 [2024-12-09 15:20:10.812137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.812171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.812374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.812411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.812629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.812664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.812829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.812865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.813146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.813181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.813356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.813394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.813551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.813587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.813724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.813760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.814061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.814096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.814349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.814387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 
00:27:09.297 [2024-12-09 15:20:10.814599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.814635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.814834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.814869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.815066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.815102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.815232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.815265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.815495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.815531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.815861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.815897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.816095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.816130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.816287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.816323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.297 [2024-12-09 15:20:10.816554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.297 [2024-12-09 15:20:10.816590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.297 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.816858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.816893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 
00:27:09.298 [2024-12-09 15:20:10.817089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.817124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.817396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.817432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.817641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.817677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.817902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.817936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.818176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.818211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.818395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.818431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.818690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.818724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.818966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.819007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.819185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.819231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.819417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.819452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 
00:27:09.298 [2024-12-09 15:20:10.819746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.819782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.820059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.820093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.820211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.820256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.820453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.820487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.820643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.820678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.821016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.821051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.821210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.821257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.821476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.821510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.821769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.821805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.822006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.822041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 
00:27:09.298 [2024-12-09 15:20:10.822329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.822365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.822596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.822633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.822773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.822808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.823027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.823062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.823323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.823359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.823487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.823521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.823716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.823752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.823952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.823987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.824130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.824166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.824386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.824422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 
00:27:09.298 [2024-12-09 15:20:10.824645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.824680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.824836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.824871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.825079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.825114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.825266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.825303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.825542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.825578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.825726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.825760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.298 [2024-12-09 15:20:10.825969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.298 [2024-12-09 15:20:10.826005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.298 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.826241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.826280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.826465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.826501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.826765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.826800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 
00:27:09.299 [2024-12-09 15:20:10.826955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.826989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.827184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.827226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.827453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.827488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.827688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.827723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.827915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.827950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.828149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.828185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.828401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.828436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.828629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.828670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.828859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.828894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.829006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.829040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 
00:27:09.299 [2024-12-09 15:20:10.829320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.829357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.829567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.829603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.829755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.829790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.830075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.830113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.830327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.830362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.830558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.830594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.830806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.830841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.830986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.831020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.831162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.831197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.831356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.831391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 
00:27:09.299 [2024-12-09 15:20:10.831519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.831553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.831751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.831787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.831978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.832013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.832170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.832205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.832440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.832477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.832636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.832670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.832865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.832899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.833045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.833081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.833273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.833311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.833438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.833473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 
00:27:09.299 [2024-12-09 15:20:10.833609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.833645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.833770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.833806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.833952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.833988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.834122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.834158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.834294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.299 [2024-12-09 15:20:10.834332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.299 qpair failed and we were unable to recover it. 00:27:09.299 [2024-12-09 15:20:10.834531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.834565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.834756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.834790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.834996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.835030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.835154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.835188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.835398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.835435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 
00:27:09.300 [2024-12-09 15:20:10.835622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.835657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.835880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.835916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.836043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.836078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.836203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.836246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.836392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.836426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.836541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.836577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.836770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.836806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.837006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.837046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.837180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.837214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.837357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.837393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 
00:27:09.300 [2024-12-09 15:20:10.837581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.837616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.837736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.837771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.837904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.837938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.838119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.838157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.838351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.838388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.838598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.838635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.838758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.838792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.838984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.839018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.839147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.839183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.839325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.839360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 
00:27:09.300 [2024-12-09 15:20:10.839555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.839589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.839725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.839760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.839898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.839934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.840077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.840111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.840242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.840279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.840421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.840457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.840596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.840629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.840743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.840778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.841000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.841035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 00:27:09.300 [2024-12-09 15:20:10.841236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.300 [2024-12-09 15:20:10.841273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.300 qpair failed and we were unable to recover it. 
00:27:09.301 [2024-12-09 15:20:10.841402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.841436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.841562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.841595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.841798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.841834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.841951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.841987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.842257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.842295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.842558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.842591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.842710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.842746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.842865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.842900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.843118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.843154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.843352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.843388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 
00:27:09.301 [2024-12-09 15:20:10.843527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.843562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.843827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.843862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.844049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.844085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.844304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.844342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.844470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.844505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.844701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.844735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.844893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.844927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.845052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.845094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.845232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.845269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.845410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.845445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 
00:27:09.301 [2024-12-09 15:20:10.845579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.845614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.845738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.845772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.845961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.845996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.846187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.846228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.846360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.846396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.846523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.846557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.846704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.846740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.846854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.846889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.847075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.847110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.847241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.847275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 
00:27:09.301 [2024-12-09 15:20:10.847456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.847491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.847698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.847735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.847847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.847880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.848011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.848044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.848163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.848198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.848340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.848373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.848491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.848524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.301 [2024-12-09 15:20:10.848718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.301 [2024-12-09 15:20:10.848753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.301 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.848951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.848986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.849168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.849203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 
00:27:09.302 [2024-12-09 15:20:10.849335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.849369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.849566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.849603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.849717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.849751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.849891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.849923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.850047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.850080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.850270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.850307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.850422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.850456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.850592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.850625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.850812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.850847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 00:27:09.302 [2024-12-09 15:20:10.850964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.302 [2024-12-09 15:20:10.850998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.302 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats continuously for tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 from 15:20:10.851 through 15:20:10.866 ...]
[... the sequence then repeats for tqpair=0x1f85500 with addr=10.0.0.2, port=4420 from 15:20:10.866 through 15:20:10.882 ...]
[... the sequence then repeats for tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 from 15:20:10.882 through 15:20:10.888 ...]
[... the sequence then repeats again for tqpair=0x1f85500 with addr=10.0.0.2, port=4420 from 15:20:10.889 through 15:20:10.892 ...]
00:27:09.307 [2024-12-09 15:20:10.893084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.893109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.893325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.893350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.893584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.893608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.893805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.893829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.894029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.894054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.894174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.894198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.894418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.894441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.894621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.894646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.894812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.894836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.307 qpair failed and we were unable to recover it. 00:27:09.307 [2024-12-09 15:20:10.895101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.307 [2024-12-09 15:20:10.895131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 
00:27:09.308 [2024-12-09 15:20:10.895398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.895425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.895667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.895690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.895955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.895979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.896240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.896277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.896514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.896548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.896702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.896737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.897033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.897067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.897279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.897305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.897438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.897462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.897668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.897703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 
00:27:09.308 [2024-12-09 15:20:10.897959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.897993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.898300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.898336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.898537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.898573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.898811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.898837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.899100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.899142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.899370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.899404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.899546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.899581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.899927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.899962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.900248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.900284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.900430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.900465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 
00:27:09.308 [2024-12-09 15:20:10.900746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.900780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.900974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.901008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.901215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.901273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.901504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.901528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.901728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.901751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.901990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.902024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.902279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.902320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.902533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.902568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.902701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.902734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.902940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.902974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 
00:27:09.308 [2024-12-09 15:20:10.903193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.903237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.903542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.903575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.903767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.903801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.904082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.904115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.904384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.904409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.904652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.904676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.904863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.904887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.905067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.308 [2024-12-09 15:20:10.905091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.308 qpair failed and we were unable to recover it. 00:27:09.308 [2024-12-09 15:20:10.905345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.905381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.905539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.905573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 
00:27:09.309 [2024-12-09 15:20:10.905766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.905791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.906054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.906078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.906260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.906296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.906432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.906467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.906657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.906690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.906878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.906903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.907154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.907180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.907474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.907499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.907692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.907717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.907823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.907846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 
00:27:09.309 [2024-12-09 15:20:10.908080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.908105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.908361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.908396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.908697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.908731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.908951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.908990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.909231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.909267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.909398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.909433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.909644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.909678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.909976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.910011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.910158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.910193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.910394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.910428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 
00:27:09.309 [2024-12-09 15:20:10.910648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.910672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.910876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.910901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.911093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.911118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.911307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.911332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.911461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.911485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.911596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.911620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.911883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.911918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.912226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.912252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.912495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.912530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.912836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.912871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 
00:27:09.309 [2024-12-09 15:20:10.913178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.913212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.913441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.913476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.913752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.309 [2024-12-09 15:20:10.913775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.309 qpair failed and we were unable to recover it. 00:27:09.309 [2024-12-09 15:20:10.914047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.914083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.914273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.914310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.914506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.914541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.914842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.914868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.915127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.915151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.915362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.915388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.915553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.915577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 
00:27:09.310 [2024-12-09 15:20:10.915799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.915832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.916152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.916187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.916447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.916472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.916658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.916682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.916811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.916835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.917009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.917033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.917238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.917274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.917413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.917448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.917659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.917706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.917974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.918017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 
00:27:09.310 [2024-12-09 15:20:10.918151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.918185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.918415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.918452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.918736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.918770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.918993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.919028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.919315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.919357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.919486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.919520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.919727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.919761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.919990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.920025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.920237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.920273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.920480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.920514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 
00:27:09.310 [2024-12-09 15:20:10.920701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.920726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.920972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.920996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.921192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.921229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.921420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.921444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.921721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.921746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.921917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.921942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.922130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.922163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.922306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.922341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.922623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.922656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.922924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.922958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 
00:27:09.310 [2024-12-09 15:20:10.923248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.310 [2024-12-09 15:20:10.923286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.310 qpair failed and we were unable to recover it. 00:27:09.310 [2024-12-09 15:20:10.923493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.923517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.923774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.923808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.923995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.924030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.924236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.924272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.924467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.924491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.924658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.924691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.924994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.925027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.925298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.925334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.925493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.925527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 
00:27:09.311 [2024-12-09 15:20:10.925733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.925757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.925991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.926019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.926223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.926248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.926510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.926534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.926645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.926669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.926904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.926939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.927150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.927185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.927470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.927495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.927770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.927795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.928076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.928100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 
00:27:09.311 [2024-12-09 15:20:10.928334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.928359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.928595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.928630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.928826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.928860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.929081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.929115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.929324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.929350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.929617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.929642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.929873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.929897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.930060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.930085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.930340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.930365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.930601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.930634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 
00:27:09.311 [2024-12-09 15:20:10.930840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.930873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.931142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.931176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.931489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.931514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.931685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.931709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.931889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.931924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.932054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.932088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.932297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.932332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.932548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.932582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.932885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.932925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 00:27:09.311 [2024-12-09 15:20:10.933183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.311 [2024-12-09 15:20:10.933243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.311 qpair failed and we were unable to recover it. 
00:27:09.311 [2024-12-09 15:20:10.933364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.933399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.933582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.933607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.933801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.933825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.933920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.933943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.934199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.934232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.934474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.934498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.934677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.934702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.934819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.934843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.935105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.935139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.935331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.935356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 
00:27:09.312 [2024-12-09 15:20:10.935531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.935563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.935704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.935738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.936007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.936041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.936320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.936356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.936644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.936678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.936950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.936984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.937277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.937312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.937556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.937590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.937727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.937762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.937984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.938018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 
00:27:09.312 [2024-12-09 15:20:10.938302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.938337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.938486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.938520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.938725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.938760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.939070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.939104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.939312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.939347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.939605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.939639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.939848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.939873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.939977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.940002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.940168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.940193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.940459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.940485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 
00:27:09.312 [2024-12-09 15:20:10.940701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.940725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.940899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.940924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.941013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.941035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.941206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.941246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.941458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.941481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.941718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.941753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.941940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.941974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.942280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.312 [2024-12-09 15:20:10.942315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.312 qpair failed and we were unable to recover it. 00:27:09.312 [2024-12-09 15:20:10.942577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.942601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.942771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.942795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 
00:27:09.313 [2024-12-09 15:20:10.943036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.943071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.943185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.943231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.943521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.943554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.943847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.943882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.944189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.944240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.944412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.944436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.944671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.944704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.944933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.944967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.945233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.945270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.945535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.945569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 
00:27:09.313 [2024-12-09 15:20:10.945849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.945874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.946035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.946060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.946316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.946352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.946654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.946678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.946947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.946971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.947213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.947245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.947413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.947437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.947615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.947649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.947905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.947940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.948241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.948275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 
00:27:09.313 [2024-12-09 15:20:10.948563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.948598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.948872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.948907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.949199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.949259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.949536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.949569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.949756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.949780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.949945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.949979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.950261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.950303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.950595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.950639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.950824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.950858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.951046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.951080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 
00:27:09.313 [2024-12-09 15:20:10.951361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.951396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.951522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.951556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.313 [2024-12-09 15:20:10.951708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.313 [2024-12-09 15:20:10.951733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.313 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.952011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.952045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.952162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.952196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.952416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.952450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.952766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.952800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.953098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.953133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.953268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.953303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.953580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.953605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 
00:27:09.314 [2024-12-09 15:20:10.953804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.953829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.953954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.953978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.954237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.954273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.954580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.954615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.954762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.954787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.954909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.954934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.955122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.955145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.955422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.955447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.955687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.955712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.955837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.955861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 
00:27:09.314 [2024-12-09 15:20:10.955969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.955993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.956103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.956127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.956327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.956362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.956595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.956640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.956829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.956865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.957056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.957090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.957278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.957313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.957429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.957464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.957693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.957718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.957901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.957925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 
00:27:09.314 [2024-12-09 15:20:10.958109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.958135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.958322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.958347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.958604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.958638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.958898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.958932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.959192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.959240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.959556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.959590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.959857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.959883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.960072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.960096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.960371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.960396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.314 [2024-12-09 15:20:10.960675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.960721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 
00:27:09.314 [2024-12-09 15:20:10.960954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.314 [2024-12-09 15:20:10.960988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.314 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.961193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.961237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.961522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.961558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.961829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.961853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.962035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.962059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.962244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.962270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.962402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.962426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.962676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.962700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.962889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.962923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.963072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.963105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 
00:27:09.315 [2024-12-09 15:20:10.963327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.963363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.963555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.963591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.963895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.963928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.964232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.964267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.964537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.964570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.964691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.964725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.964997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.965021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.965262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.965299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.965522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.965547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.965798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.965822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 
00:27:09.315 [2024-12-09 15:20:10.966082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.966107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.966393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.966418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.966668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.966692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.966855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.966879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.967138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.967163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.967325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.967351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.967611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.967645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.967947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.967981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.968232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.968268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.968481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.968522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 
00:27:09.315 [2024-12-09 15:20:10.968719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.968744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.968980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.969006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.969195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.969230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.969401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.969426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.969601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.969637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.969774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.969807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.969928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.969963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.970149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.970183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.970466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.970500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 00:27:09.315 [2024-12-09 15:20:10.970631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.315 [2024-12-09 15:20:10.970665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.315 qpair failed and we were unable to recover it. 
00:27:09.321 [2024-12-09 15:20:11.018525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.018560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.018746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.018781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.019057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.019092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.019366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.019401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.019688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.019722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.019907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.019950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.020138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.020163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.020331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.020356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.020635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.020660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.020836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.020861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 
00:27:09.321 [2024-12-09 15:20:11.021138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.021162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.021426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.021452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.021713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.021737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.021929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.021954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.022226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.022262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.022422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.022458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.022641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.022674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.022816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.022851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.023108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.023143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.023435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.023470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 
00:27:09.321 [2024-12-09 15:20:11.023752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.023788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.024006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.024040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.024271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.024305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.024587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.024621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.024803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.024828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.025095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.025130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.025337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.025372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.025659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.025684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.025798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.025822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.026107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.026132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 
00:27:09.321 [2024-12-09 15:20:11.026309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.026334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.026595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.026619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.026874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.026899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.027058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.027083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.027249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.027274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.321 [2024-12-09 15:20:11.027438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.321 [2024-12-09 15:20:11.027462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.321 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.027699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.027725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.027919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.027943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.028064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.028093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.028364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.028390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 
00:27:09.322 [2024-12-09 15:20:11.028551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.028576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.028818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.028842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.029018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.029042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.029204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.029238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.029500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.029524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.029630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.029656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.029917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.029952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.030177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.030213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.030495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.030529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.030757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.030782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 
00:27:09.322 [2024-12-09 15:20:11.030986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.031009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.031185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.031208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.031515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.031549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.031843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.031878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.032139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.032173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.032474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.032509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.032772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.032805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.033039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.033073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.033262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.033297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.033522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.033557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 
00:27:09.322 [2024-12-09 15:20:11.033697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.033733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.033871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.033905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.034161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.034195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.034409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.034443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.034704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.034738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.035032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.035081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.035296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.035331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.035533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.035567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.035764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.035788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 00:27:09.322 [2024-12-09 15:20:11.035965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.322 [2024-12-09 15:20:11.035989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.322 qpair failed and we were unable to recover it. 
00:27:09.322 [2024-12-09 15:20:11.036284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.036309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.036428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.036451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.036545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.036569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.036727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.036752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.037014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.037048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.037320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.037356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.037662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.037687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.037863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.037888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.037983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.038005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.038293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.038319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 
00:27:09.323 [2024-12-09 15:20:11.038479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.038504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.038781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.038828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.039033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.039066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.039251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.039286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.039478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.039513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.039806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.039831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.040013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.040038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.040202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.040238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.040425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.040448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.040653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.040679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 
00:27:09.323 [2024-12-09 15:20:11.040857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.040891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.041085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.041120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.041405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.041446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.041716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.041750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.042008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.042043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.042275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.042312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.042522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.042556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.042783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.042816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.043004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.043037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.043342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.043376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 
00:27:09.323 [2024-12-09 15:20:11.043644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.043687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.043982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.044016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.044295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.044330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.044605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.044640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.044927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.044962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.045263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.045299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.045589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.045625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.045750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.323 [2024-12-09 15:20:11.045775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.323 qpair failed and we were unable to recover it. 00:27:09.323 [2024-12-09 15:20:11.045887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.045911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.046144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.046168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 
00:27:09.324 [2024-12-09 15:20:11.046405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.046430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.046662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.046686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.046849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.046873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.047050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.047085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.047268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.047303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.047446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.047479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.047692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.047726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.047941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.047983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.048225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.048250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.048464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.048488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 
00:27:09.324 [2024-12-09 15:20:11.048658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.048681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.048940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.048965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.049157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.049182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.049369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.049395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1587295 Killed "${NVMF_APP[@]}" "$@" 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.049542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.049567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.049812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.049846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.050036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.050071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:09.324 [2024-12-09 15:20:11.050262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.050300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.050529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.050565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 
00:27:09.324 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:09.324 [2024-12-09 15:20:11.050769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.050803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.324 [2024-12-09 15:20:11.050987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.051014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.051269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.051302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.051563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.051599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.051799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.051834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.052142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.052167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.052427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.052453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.052696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.052721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 
00:27:09.324 [2024-12-09 15:20:11.052983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.053007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.053262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.053288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.053552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.053576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.053710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.053735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.054030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.054064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.054360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.054395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.054674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.324 [2024-12-09 15:20:11.054709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.324 qpair failed and we were unable to recover it. 00:27:09.324 [2024-12-09 15:20:11.054924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.054957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.055211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.055245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.055427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.055450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 
00:27:09.325 [2024-12-09 15:20:11.055684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.055709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.055881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.055906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.056057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.056081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.056266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.056292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.056548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.056572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.056814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.056837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.057105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.057130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.057387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.057411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.057599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.057623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.057807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.057832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 
00:27:09.325 [2024-12-09 15:20:11.058106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.058130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.058272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.058297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.058422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.058444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.058723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1588003 00:27:09.325 [2024-12-09 15:20:11.058748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.059034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.059060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1588003 00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:09.325 [2024-12-09 15:20:11.059346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.059373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1588003 ']' 00:27:09.325 [2024-12-09 15:20:11.059617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.059643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.059751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.059776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 
00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:09.325 [2024-12-09 15:20:11.059992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 [2024-12-09 15:20:11.060020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
00:27:09.325 [2024-12-09 15:20:11.060127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 [2024-12-09 15:20:11.060150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:09.325 [2024-12-09 15:20:11.060339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 [2024-12-09 15:20:11.060365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
00:27:09.325 [2024-12-09 15:20:11.060477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:09.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:09.325 [2024-12-09 15:20:11.060504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
00:27:09.325 [2024-12-09 15:20:11.060711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 [2024-12-09 15:20:11.060735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:09.325 [2024-12-09 15:20:11.061077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.325 [2024-12-09 15:20:11.061105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
00:27:09.325 [2024-12-09 15:20:11.061284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.325 [2024-12-09 15:20:11.061309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.325 qpair failed and we were unable to recover it.
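The interleaved xtrace lines above (local rpc_addr=/var/tmp/spdk.sock, local max_retries=100, and the 'Waiting for process...' echo) appear to come from waitforlisten in autotest_common.sh, polling for the freshly launched nvmf_tgt in the cvl_0_0_ns_spdk namespace to bring up its RPC socket while the initiator keeps failing with errno 111. A simplified sketch of that style of wait loop, with illustrative names rather than the actual implementation:

    # Sketch only: poll for the target's RPC UNIX socket, giving up after
    # max_retries attempts, roughly what the xtrace above corresponds to.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        [ -S "$rpc_addr" ] && break
        sleep 0.5
    done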
00:27:09.325 [2024-12-09 15:20:11.061424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.061447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.061732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.061757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.061849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.061871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.061992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.062014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.062186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.062210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.325 [2024-12-09 15:20:11.062470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.325 [2024-12-09 15:20:11.062497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.325 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.062706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.062731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.062855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.062879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.063132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.063157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.063420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.063446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 
00:27:09.326 [2024-12-09 15:20:11.063572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.063596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.063827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.063851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.064029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.064057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.064264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.064309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.064501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.064527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.064696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.064720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.064949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.064974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.065241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.065269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.065460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.065487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.065677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.065702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 
00:27:09.326 [2024-12-09 15:20:11.065820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.065845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.066028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.066057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.066315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.066344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.066530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.066555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.066739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.066764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.066943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.066968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.067064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.067086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.067257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.067282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.067484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.067509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.067697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.067722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 
00:27:09.326 [2024-12-09 15:20:11.067939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.067964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.068237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.068267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.068435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.068459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.068621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.068645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.068776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.068803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.068920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.068947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.069129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.069153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.069265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.069291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.069479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.069503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.069683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.069708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 
00:27:09.326 [2024-12-09 15:20:11.069844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.069870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.070130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.070154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.070332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.326 [2024-12-09 15:20:11.070357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.326 qpair failed and we were unable to recover it. 00:27:09.326 [2024-12-09 15:20:11.070568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.327 [2024-12-09 15:20:11.070591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.327 qpair failed and we were unable to recover it. 00:27:09.327 [2024-12-09 15:20:11.070858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.327 [2024-12-09 15:20:11.070883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.327 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.071115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.071140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.071342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.071368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.071505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.071529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.071633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.071657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.071762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.071784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 
00:27:09.608 [2024-12-09 15:20:11.071966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.072176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.072201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.072401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.072426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.072618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.072642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.072770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.072794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.072961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.072987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.073232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.073258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.073445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.073472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.073714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.073739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.073918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.073943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 
00:27:09.608 [2024-12-09 15:20:11.074141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.074170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.074441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.074466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.074599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.074624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.074760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.074784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.075016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.075042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.075239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.075265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.075393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.075418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.075676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.075701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.075885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.075910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.076021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.076045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 
00:27:09.608 [2024-12-09 15:20:11.076161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.076187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.076421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.076449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.076636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.608 [2024-12-09 15:20:11.076661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.608 qpair failed and we were unable to recover it. 00:27:09.608 [2024-12-09 15:20:11.076797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.076821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.077026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.077052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.077286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.077311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.077575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.077600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.077761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.077786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.077909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.077931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.078138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.078162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 
00:27:09.609 [2024-12-09 15:20:11.078325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.078356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.078617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.078641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.078818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.078843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.078945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.078967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.079178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.079202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.079448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.079473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.079647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.079671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.079839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.079864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.080032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.080056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.080170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.080198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 
00:27:09.609 [2024-12-09 15:20:11.080312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.080337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.080536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.080563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.080663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.080686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.080851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.080875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.081003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.081028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.081209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.081245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.081433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.081457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.081622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.081646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.081928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.081953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.082144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.082168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 
00:27:09.609 [2024-12-09 15:20:11.082277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.082299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.082481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.082506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.082669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.082693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.082989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.083014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.083189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.083213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.083360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.083385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.083559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.083583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.083788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.083813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.083994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.084018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.084304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.084329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 
00:27:09.609 [2024-12-09 15:20:11.084511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.609 [2024-12-09 15:20:11.084536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.609 qpair failed and we were unable to recover it. 00:27:09.609 [2024-12-09 15:20:11.084650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.084675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.084909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.084934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.085111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.085135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.085390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.085416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.085648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.085672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.085846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.085874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.086081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.086106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.086276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.086302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.086395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.086418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 
00:27:09.610 [2024-12-09 15:20:11.086535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.086560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.086748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.086773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.086937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.086961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.087130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.087154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.087338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.087364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.087624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.087649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.087767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.087791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.087970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.087994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.088162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.088186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 00:27:09.610 [2024-12-09 15:20:11.088301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.610 [2024-12-09 15:20:11.088327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.610 qpair failed and we were unable to recover it. 
00:27:09.610 [2024-12-09 15:20:11.088443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.610 [2024-12-09 15:20:11.088467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.610 qpair failed and we were unable to recover it.
00:27:09.613 [2024-12-09 15:20:11.111978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.613 [2024-12-09 15:20:11.112003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.613 qpair failed and we were unable to recover it.
00:27:09.614 [2024-12-09 15:20:11.112712] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization...
00:27:09.614 [2024-12-09 15:20:11.112771] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:09.614 [2024-12-09 15:20:11.112786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.614 [2024-12-09 15:20:11.112817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.614 qpair failed and we were unable to recover it.
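For context on the repeated errors above: errno 111 on Linux is ECONNREFUSED, i.e. the connection attempt to 10.0.0.2 port 4420 (the NVMe/TCP port used by this test) is being refused because nothing is accepting on that address and port at that moment. The sketch below is only an illustration, not SPDK code and not part of this test run; it uses plain POSIX socket calls with the address and port taken from the log to show how connect() surfaces errno = 111 when no listener is up.

/* Minimal standalone sketch (illustration only): connect() to an address/port
 * with no listener fails with errno 111 (ECONNREFUSED) on Linux, the same
 * condition posix_sock_create() keeps logging above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe/TCP listener on 10.0.0.2:4420 this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

The retries above are consistent with this: the initiator keeps reconnecting while the nvmf target, whose DPDK/SPDK initialization is logged just before this point, has not yet opened its listener on port 4420.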
00:27:09.615 [2024-12-09 15:20:11.125649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.125674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.615 [2024-12-09 15:20:11.125774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.125798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.615 [2024-12-09 15:20:11.125980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.126004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.615 [2024-12-09 15:20:11.126257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.126335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.615 [2024-12-09 15:20:11.126501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.126539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.615 [2024-12-09 15:20:11.126733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.126767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.615 [2024-12-09 15:20:11.126908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.615 [2024-12-09 15:20:11.126942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.615 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.127100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.127134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.127330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.127369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.127516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.127551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-12-09 15:20:11.127760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.127795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.127999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.128034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.128236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.128263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.128440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.128483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.128713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.128737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.128909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.128932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.129106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.129130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.129313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.129337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.129509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.129533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.129643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.129667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-12-09 15:20:11.129895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.129920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.130076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.130100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.130190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.130212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.130407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.130432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.130533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.130557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.130838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.130861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.131046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.131070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.131181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.131206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.131334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.131358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.131587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.131610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-12-09 15:20:11.131813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.131854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.132134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.132171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.132375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.132409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.132602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.132638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.132767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.132802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.132997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.133030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.133235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.133263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.133445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.133470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.133644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.133669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.133776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.133799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 
00:27:09.616 [2024-12-09 15:20:11.133968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.133991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.134119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.134142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.134317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.134343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.134432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.134457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.134557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.616 [2024-12-09 15:20:11.134581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.616 qpair failed and we were unable to recover it. 00:27:09.616 [2024-12-09 15:20:11.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.134764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.134868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.134891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.134975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.134997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.135154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.135178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.135306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.135331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 
00:27:09.617 [2024-12-09 15:20:11.135444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.135467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.135694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.135718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.135912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.135936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.136112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.136135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.136317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.136342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.136527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.136551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.136733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.136758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.136957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.136996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.137197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.137243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.137447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.137482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 
00:27:09.617 [2024-12-09 15:20:11.137619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.137653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.137838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.137872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.138073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.138106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.138311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.138338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.138445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.138468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.138722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.138746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.138982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.139177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.139344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.139456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 
00:27:09.617 [2024-12-09 15:20:11.139583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.139707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.139897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.139921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.140079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.140103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.140263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.140288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.140458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.140482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.140589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.140613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.140803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.140827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.140943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.140967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 00:27:09.617 [2024-12-09 15:20:11.141246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.617 [2024-12-09 15:20:11.141271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.617 qpair failed and we were unable to recover it. 
00:27:09.618 [2024-12-09 15:20:11.141518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.141542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.141729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.141753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.141866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.141889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.142079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.142103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.142265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.142303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.142488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.142512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.142689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.142712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.142869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.142893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.143015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.143039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.143207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.143239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 
00:27:09.618 [2024-12-09 15:20:11.143366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.143389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.143550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.143573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.143769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.143793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.143959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.143982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.144072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.144096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.144185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.144206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.144458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.144483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.144713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.144737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.144898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.144922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.145029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.145052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 
00:27:09.618 [2024-12-09 15:20:11.145246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.145270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.145358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.145379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.145537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.145560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.145734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.145757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.145935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.145958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.146129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.146151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.146267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.146291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.146384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.146408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.146561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.146584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.146696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.146719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 
00:27:09.618 [2024-12-09 15:20:11.146887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.146911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.147083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.147111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.147233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.147257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.147370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.147393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.147582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.147605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.147788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.147811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.148010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.148062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.148226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.148250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.148366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.618 [2024-12-09 15:20:11.148389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.618 qpair failed and we were unable to recover it. 00:27:09.618 [2024-12-09 15:20:11.148554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.148578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 
00:27:09.619 [2024-12-09 15:20:11.148845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.148869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.148959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.148983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.149078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.149101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.149272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.149296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.149473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.149497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.149664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.149688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.149783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.149807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.149912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.149934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.150180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.150203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.150408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.150432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 
00:27:09.619 [2024-12-09 15:20:11.150602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.150625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.150721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.150745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.150833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.150857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.151044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.151067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.151260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.151284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.151466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.151489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.151676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.151699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.151802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.151825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.152073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.152101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.152264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.152287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 
00:27:09.619 [2024-12-09 15:20:11.152482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.152505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.152589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.152610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.152698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.152720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.152890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.152914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.153025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.153048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.153233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.153257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.153369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.153391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.153514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.153537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.153690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.153713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.153899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.153921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 
00:27:09.619 [2024-12-09 15:20:11.154091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.154113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.154271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.154296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.154528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.154551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.154651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.154674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.154833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.154856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.154953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.154975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.155160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.155182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.155314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.155339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.619 qpair failed and we were unable to recover it. 00:27:09.619 [2024-12-09 15:20:11.155498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.619 [2024-12-09 15:20:11.155522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.155722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.155745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 
00:27:09.620 [2024-12-09 15:20:11.155975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.155997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.156174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.156196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.156374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.156398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.156555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.156578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.156675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.156698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.156868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.156892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.157068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.157091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.157195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.157225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.157319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.157343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.157582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.157606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 
00:27:09.620 [2024-12-09 15:20:11.157884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.157907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.158013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.158036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.158285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.158308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.158411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.158433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.158536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.158559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.158719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.158743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.158845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.158868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.159044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.159067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.159237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.159261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.159403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.159476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 
00:27:09.620 [2024-12-09 15:20:11.159607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.159644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.159783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.159815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.159919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.159951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.160057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.160090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.160201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.160243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.160428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.160455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.160536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.160558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.160649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.160672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.160831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.160853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.161054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.161078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 
00:27:09.620 [2024-12-09 15:20:11.161318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.161342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.161445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.161468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.161642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.161664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.161831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.161855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.162028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.162051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.162142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.162164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.162386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.162410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.620 [2024-12-09 15:20:11.162492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.620 [2024-12-09 15:20:11.162514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.620 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.162668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.162690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.162854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.162877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 
00:27:09.621 [2024-12-09 15:20:11.162961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.162983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.163096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.163119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.163377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.163401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.163505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.163528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.163616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.163638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.163743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.163767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.164029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.164065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.164292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.164327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.164436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.164469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.164648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.164681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 
00:27:09.621 [2024-12-09 15:20:11.164883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.164916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.165044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.165077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.165293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.165319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.165418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.165440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.165598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.165623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.165718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.165739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.165975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.165997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.166184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.166207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.166436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.166460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.166626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.166649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 
00:27:09.621 [2024-12-09 15:20:11.166811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.166833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.167111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.167135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.167304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.167329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.167497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.167520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.167630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.167654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.167903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.167926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.168158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.168182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.168350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.168374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.168487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.168510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.168706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.168729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 
00:27:09.621 [2024-12-09 15:20:11.168852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.168875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.168980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.621 [2024-12-09 15:20:11.169002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.621 qpair failed and we were unable to recover it. 00:27:09.621 [2024-12-09 15:20:11.169169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.169192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.169333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.169359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.169537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.169559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.169731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.169754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.169914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.169938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.170096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.170118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.170238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.170262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.170489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.170512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 
00:27:09.622 [2024-12-09 15:20:11.170756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.170778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.170929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.170952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.171121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.171144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.171335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.171358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.171524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.171547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.171654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.171676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.171845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.171868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.171987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.172010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.172177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.172200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.172457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.172480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 
00:27:09.622 [2024-12-09 15:20:11.172636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.172658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.172827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.172849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.172960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.172982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.173071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.173094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.173262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.173285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.173440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.173462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.173619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.173643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.173731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.173753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.173919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.173953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.174198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.174228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 
00:27:09.622 [2024-12-09 15:20:11.174337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.174367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.174535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.174558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.174786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.174809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.174921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.174943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.175113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.175136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.175240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.175264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.175430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.175452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.622 qpair failed and we were unable to recover it. 00:27:09.622 [2024-12-09 15:20:11.175614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.622 [2024-12-09 15:20:11.175638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.175878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.175901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.176064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.176086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 
00:27:09.623 [2024-12-09 15:20:11.176172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.176195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.176370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.176393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.176495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.176517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.176726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.176749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.176864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.176887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.177133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.177156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.177329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.177353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.177525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.177548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.177743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.177766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.177962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.177986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 
00:27:09.623 [2024-12-09 15:20:11.178155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.178179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.178351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.178375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.178486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.178510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.178597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.178620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.178771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.178811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.178982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.179005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.179166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.179188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.179368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.179395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.179497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.179520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.179618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.179641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 
00:27:09.623 [2024-12-09 15:20:11.179798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.179821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.179977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.180000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.180101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.180124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.180290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.180313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.180420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.623 [2024-12-09 15:20:11.180443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.623 qpair failed and we were unable to recover it. 00:27:09.623 [2024-12-09 15:20:11.180601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.180624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.180715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.180739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.180960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.180982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.181084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.181107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.181328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.181351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-12-09 15:20:11.181461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.181483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.181664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.181687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.181778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.181800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.182020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.182042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.182330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.182354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.182515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.182537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.182762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.182784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.182886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.182909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.183101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.183123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 00:27:09.624 [2024-12-09 15:20:11.183249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.624 [2024-12-09 15:20:11.183273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.624 qpair failed and we were unable to recover it. 
00:27:09.624 [2024-12-09 15:20:11.183381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.624 [2024-12-09 15:20:11.183403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.624 qpair failed and we were unable to recover it.
00:27:09.626 [2024-12-09 15:20:11.198383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:09.624-00:27:09.630 (the three *ERROR* lines above repeat for every reconnect attempt from 15:20:11.183381 through 15:20:11.222342, always with errno = 111 against addr=10.0.0.2, port=4420; most attempts report tqpair=0x1f85500, with occasional attempts on tqpair=0x7f9284000b90, 0x7f9288000b90 and 0x7f9290000b90, each ending in "qpair failed and we were unable to recover it.")
00:27:09.630 [2024-12-09 15:20:11.222490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.222514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.222677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.222700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.222796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.222819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.222990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.223013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.223204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.223234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.223485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.223508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.223598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.223619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.223884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.223907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.224013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.224035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.224231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.224256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 
00:27:09.630 [2024-12-09 15:20:11.224410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.224432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.224693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.224730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.224916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.224949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.225083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.225115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.225299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.225325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.225431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.225454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.225673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.225697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.225817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.225841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.225938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.225962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.226124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 
00:27:09.630 [2024-12-09 15:20:11.226246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.226369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.226491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.226621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.226798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.226921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.226944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.227147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.227170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.227334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.227358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.227465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.227488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.227655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.227678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 
00:27:09.630 [2024-12-09 15:20:11.227833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.227857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.227970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.227993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.228183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.228206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.228320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.228343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.228438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.228460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.630 qpair failed and we were unable to recover it. 00:27:09.630 [2024-12-09 15:20:11.228622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.630 [2024-12-09 15:20:11.228645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.228811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.228834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.229056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.229079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.229322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.229360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.229488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.229522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 
00:27:09.631 [2024-12-09 15:20:11.229646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.229677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.229873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.229912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.230108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.230142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.230400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.230432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.230540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.230576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.230739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.230762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.230870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.230897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.231119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.231142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.231306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.231330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.631 [2024-12-09 15:20:11.231422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.231446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 
00:27:09.631 [2024-12-09 15:20:11.231702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.631 [2024-12-09 15:20:11.231726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.631 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.231896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.231919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.232095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.232208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.232240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.232348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.232370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.232485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.232508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.232675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.232698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.232813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.232836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.233025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.233048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.233213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.233257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 
00:27:09.632 [2024-12-09 15:20:11.233434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.233458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.233614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.233637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.233791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.233815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.233916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.233938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.234103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.234126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.234251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.234290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.234535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.234567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.234765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.234990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.235023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.235199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.235240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 
00:27:09.632 [2024-12-09 15:20:11.235489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.235522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.235724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.235761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.235894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.235927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.236120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.236157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.236370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.236402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.236515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.236537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.236634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.236658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.632 [2024-12-09 15:20:11.236812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.632 [2024-12-09 15:20:11.236836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.632 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.237007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.237031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.237194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.237227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 
00:27:09.633 [2024-12-09 15:20:11.237394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.237419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.237527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.237549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.237652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.237676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.237833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.237858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.633 [2024-12-09 15:20:11.238685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it.
00:27:09.633 [2024-12-09 15:20:11.238707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.633 [2024-12-09 15:20:11.238715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.633 [2024-12-09 15:20:11.238722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.633 [2024-12-09 15:20:11.238727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.633 [2024-12-09 15:20:11.238773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.238894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.238922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.239113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.239136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.239290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.239312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.239392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.239414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.239596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.239620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.239838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.239861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.240024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.240047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 
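The app_setup_trace notices above describe two ways to get at the tracepoint data enabled by the 0xFFFF group mask: query the running nvmf target with spdk_trace, or keep the shared-memory trace file for later inspection. A short usage sketch built only from the command and file named in the log; the output destinations are illustrative:
# Snapshot the running nvmf target's trace events, as the notice suggests.
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
# Or copy the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0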
00:27:09.633 [2024-12-09 15:20:11.240213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.240245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.240205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:09.633 [2024-12-09 15:20:11.240398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.240290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:09.633 [2024-12-09 15:20:11.240422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 [2024-12-09 15:20:11.240396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.240397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:09.633 [2024-12-09 15:20:11.240595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.240617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.633 [2024-12-09 15:20:11.240716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.633 [2024-12-09 15:20:11.240737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.633 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.240845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.240871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.241106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.241294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.241417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.241558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 
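The reactor_run notices mixed into the error stream show the target's event framework starting reactors on cores 4, 5, 6 and 7, i.e. the app was launched with a core mask covering exactly those cores. A hedged sketch of how that looks on the command line, assuming the standard SPDK -m core-mask option; the binary path is illustrative and not taken from this log:
# Core mask 0xf0 selects cores 4-7, matching the four reactor notices above.
./build/bin/nvmf_tgt -m 0xf0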
00:27:09.634 [2024-12-09 15:20:11.241677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.241802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.241932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.241955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.242126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.242321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.242505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.242639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.242758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.242896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.242987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.243010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 
00:27:09.634 [2024-12-09 15:20:11.243200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.243229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.243387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.243411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.243575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.243599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.243697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.243720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.243959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.243983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.244096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.244119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.244224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.244246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.244437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.244460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.244569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.244592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.244753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.244776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 
00:27:09.634 [2024-12-09 15:20:11.244874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.634 [2024-12-09 15:20:11.244897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.634 qpair failed and we were unable to recover it. 00:27:09.634 [2024-12-09 15:20:11.245063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.245086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.245244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.245275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.245479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.245502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.245692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.245720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.245836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.245859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.246061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.246085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.246308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.246334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.246497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.246520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.246738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.246761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 
00:27:09.635 [2024-12-09 15:20:11.246954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.246978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.247133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.247156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.247271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.247295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.247487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.247511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.247693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.247716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.247803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.247824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.248006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.248048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.248298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.248332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.248548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.248594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.248709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.248742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 
00:27:09.635 [2024-12-09 15:20:11.248874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.248907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.249094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.249127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.249293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.249322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.249494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.249518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.249669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.249692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.249840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.249863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.249972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.249996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.250095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.250118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.250214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-12-09 15:20:11.250247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.635 qpair failed and we were unable to recover it. 00:27:09.635 [2024-12-09 15:20:11.250467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.250490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 
00:27:09.636 [2024-12-09 15:20:11.250647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.250670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.250782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.250806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.250904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.250928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.251029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.251053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.251229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.251254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.251421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.251444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.251542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.251565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.251715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.251740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.251921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.251944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.252097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.252120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 
00:27:09.636 [2024-12-09 15:20:11.252348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.252372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.252485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.252507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.252727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.252750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.253001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.253043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.253197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.253241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.253510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.253547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.253758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.253797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.253982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.254015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.254137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.254178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.254415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.254447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 
00:27:09.636 [2024-12-09 15:20:11.254622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.254647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.254803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.254830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.255079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.255103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.255276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.255301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.255398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.255421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.255535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.636 [2024-12-09 15:20:11.255559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.636 qpair failed and we were unable to recover it. 00:27:09.636 [2024-12-09 15:20:11.255708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.255732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.255880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.255904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.256094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.256116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.256238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.256263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 
00:27:09.637 [2024-12-09 15:20:11.256374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.256397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.256505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.256528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.256774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.256798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.256905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.256929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.257164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.257189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.257298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.257323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.257429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.257452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.257561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.257583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.257770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.257794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.257946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.257970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 
00:27:09.637 [2024-12-09 15:20:11.258057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.258084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.258170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.258193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.258370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.258395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.258556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.258579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.258802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.258826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.258981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.259172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.259308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.259449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.259575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 
00:27:09.637 [2024-12-09 15:20:11.259716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.259904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.259929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.260115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.260140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.260298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.637 [2024-12-09 15:20:11.260323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.637 qpair failed and we were unable to recover it. 00:27:09.637 [2024-12-09 15:20:11.260416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.260440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.260559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.260583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.260700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.260725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.260945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.260970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.261175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.261198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.261382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.261406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 
00:27:09.638 [2024-12-09 15:20:11.261557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.261580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.261821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.261844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.262085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.262109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.262270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.262295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.262480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.262504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.262701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.262725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.262898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.262922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.263084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.263113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.263313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.263338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.263572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.263595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 
00:27:09.638 [2024-12-09 15:20:11.263778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.263801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.263919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.263942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.264033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.264056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.264227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.264251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.264403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.264426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.264570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.264594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.638 [2024-12-09 15:20:11.264819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.638 [2024-12-09 15:20:11.264844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.638 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.264935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.264959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.265102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.265124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.265350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.265375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 
00:27:09.639 [2024-12-09 15:20:11.265618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.265643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.265889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.265912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.266154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.266177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.266433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.266458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.266605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.266804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.266828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.266949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.266974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.267129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.267152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.267338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.267362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.267602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.267626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 
00:27:09.639 [2024-12-09 15:20:11.267838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.267861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.268045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.268068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.268177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.268200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.268387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.268411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.268665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.268694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.268921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.268944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.269095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.269119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.269288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.269313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.269576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.269600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.269843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.269868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 
00:27:09.639 [2024-12-09 15:20:11.270107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.270130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.270301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.270325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.270494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.270517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.270733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.639 [2024-12-09 15:20:11.270757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.639 qpair failed and we were unable to recover it. 00:27:09.639 [2024-12-09 15:20:11.270926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.270950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.271114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.271137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.271329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.271353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.271467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.271491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.271725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.271750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.271928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.271951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 
00:27:09.640 [2024-12-09 15:20:11.272058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.272083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.272252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.272307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.272546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.272569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.272816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.272839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.273081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.273106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.273256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.273281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.273476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.273499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.273737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.273762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.273931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.273955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.274207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.274239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 
00:27:09.640 [2024-12-09 15:20:11.274437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.274460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.274705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.274729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.274848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.274871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.275068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.275091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.275317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.275341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.275528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.275552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.275770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.275793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.275962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.275985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.276077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.276098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.276259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.276284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 
00:27:09.640 [2024-12-09 15:20:11.276479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.276503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.276745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.276768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.276996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.277019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.277172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.277196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.277523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.277582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.277801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.277856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.278141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.278175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.640 qpair failed and we were unable to recover it. 00:27:09.640 [2024-12-09 15:20:11.278462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.640 [2024-12-09 15:20:11.278497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.278682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.278715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.278977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.279011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 
00:27:09.641 [2024-12-09 15:20:11.279245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.279271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.279501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.279523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.279764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.279787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.279896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.279917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.280078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.280100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.280209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.280240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.280400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.280423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.280636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.280659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.280826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.280850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.281031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.281055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 
00:27:09.641 [2024-12-09 15:20:11.281207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.281237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.281354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.281377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.281552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.281576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.281739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.281761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.281872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.281894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.281989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.282010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.282185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.282208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.282479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.282503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.282745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.282768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.282921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.282943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 
00:27:09.641 [2024-12-09 15:20:11.283184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.283207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.283450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.283473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.283688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.283726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.283999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.284032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.284334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.284369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.284542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.284575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.284824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.284856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.285096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.285129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.285379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.285414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.285601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.285633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 
00:27:09.641 [2024-12-09 15:20:11.285833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.285867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.285993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.286020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.286280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.286304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.286573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.286596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.286719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.286742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.286933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.641 [2024-12-09 15:20:11.286956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.641 qpair failed and we were unable to recover it. 00:27:09.641 [2024-12-09 15:20:11.287182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.287205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.287446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.287470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.287720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.287744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.287970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.287994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 
00:27:09.642 [2024-12-09 15:20:11.288235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.288260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.288446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.288470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.288629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.288652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.288816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.288839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.289081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.289105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.289213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.289253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.289424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.289446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.289602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.289625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.289776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.289800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.290008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.290047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 
00:27:09.642 [2024-12-09 15:20:11.290313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.290349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.290647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.290680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.290941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.290967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.291143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.291168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.291397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.291422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.291605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.291629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.291794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.291817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.291997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.292022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.292250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.292276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.292443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.292467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 
00:27:09.642 [2024-12-09 15:20:11.292569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.292592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.292769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.292792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.293034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.293059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.293234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.293261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.293486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.293511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.293703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.293729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.293916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.293940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.294224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.294250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.294435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.294459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.294731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.294758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 
00:27:09.642 [2024-12-09 15:20:11.295004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.295028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.295194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.295226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.295344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.295369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.295600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.295626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.295731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.642 [2024-12-09 15:20:11.295755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.642 qpair failed and we were unable to recover it. 00:27:09.642 [2024-12-09 15:20:11.295999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.296023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.296282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.296316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.296415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.296437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.296626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.296650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.296763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.296787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 
00:27:09.643 [2024-12-09 15:20:11.296946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.296971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.297136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.297160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.297414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.297439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.297683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.297707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.297928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.297952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.298169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.298193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.298313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.298337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.298523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.298546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.298705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.298728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.299017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.299039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 
00:27:09.643 [2024-12-09 15:20:11.299311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.299336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.299501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.299524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.299741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.299764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.299919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.299942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.300093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.300115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.300273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.300297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.300514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.300537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.300775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.300797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.300973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.300996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.301226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.301250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 
00:27:09.643 [2024-12-09 15:20:11.301481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.301504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.301738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.301760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.302007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.302029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.302195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.302248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.302513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.302536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.302638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.302661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.302832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.302855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.303110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.303133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.303412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.303435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 00:27:09.643 [2024-12-09 15:20:11.303619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.643 [2024-12-09 15:20:11.303642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.643 qpair failed and we were unable to recover it. 
00:27:09.643 [2024-12-09 15:20:11.303852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.303874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.304065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.304088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.304323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.304347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.304577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.304599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.304747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.304770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.304919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.304941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.305112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.305134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.305381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.305405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.305497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.305518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.305735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.305757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 
00:27:09.644 [2024-12-09 15:20:11.305911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.305933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.306020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.306041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.306202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.306231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.306401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.306423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.306608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.306631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.306857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.306880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.307070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.307093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.307269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.307293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.307384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.307405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.307500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.307522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 
00:27:09.644 [2024-12-09 15:20:11.307636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.307661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.307815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.307838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.308058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.308081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.308242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.308265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.308373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.308393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.308554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.308577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.308852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.308874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.309025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.309048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.309283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.309307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.309551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.309574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 
00:27:09.644 [2024-12-09 15:20:11.309819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.309841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.310065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.310088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.310329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.310353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.310469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.310490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.310710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.310733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.310892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.310914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.311110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.311133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.311230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.311252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.311405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.644 [2024-12-09 15:20:11.311427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.644 qpair failed and we were unable to recover it. 00:27:09.644 [2024-12-09 15:20:11.311671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.311694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 
00:27:09.645 [2024-12-09 15:20:11.311929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.311953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.312104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.312127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.312320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.312344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.312552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.312575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.312677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.312698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.312914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.312936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.313099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.313122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.313342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.313365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.313532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.313555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.313743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.313765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 
00:27:09.645 [2024-12-09 15:20:11.314007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.314030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.314236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.314259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.314522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.314544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.314788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.314811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.315052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.315075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.315231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.315255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.315370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.315393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.315547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.315570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.315787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.315810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.315974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.315997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 
00:27:09.645 [2024-12-09 15:20:11.316240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.316264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.316370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.316391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.316630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.316653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.316746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.316767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.316925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.316947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.317118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.317141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.317292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.317316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.317419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.317440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.317605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.317627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.317814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.317836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 
00:27:09.645 [2024-12-09 15:20:11.318008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.318030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.318130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.318150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.318328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.318352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.318455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.318477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.318688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.318711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.318887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.318910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.319108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.319131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.319280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.319304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.645 qpair failed and we were unable to recover it. 00:27:09.645 [2024-12-09 15:20:11.319456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.645 [2024-12-09 15:20:11.319479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.319696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.319718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 
00:27:09.646 [2024-12-09 15:20:11.319889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.319912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.320138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.320162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.320316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.320340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.320505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.320529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.320613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.320636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.320786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.320809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.321041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.321065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.321167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.321188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.321303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.321329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.321422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.321443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 
00:27:09.646 [2024-12-09 15:20:11.321604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.321628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.321872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.321895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.322137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.322160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.322258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.322281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.322450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.322473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.322641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.322664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.322876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.322899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.323077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.323100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.323285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.323309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.323400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.323421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 
00:27:09.646 [2024-12-09 15:20:11.323531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.323553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.323753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.323776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.324001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.324024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.324264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.324288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.324441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.324464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.324638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.324661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.324811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.324834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.325101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.325124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.325364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.325388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.325551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.325574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 
00:27:09.646 [2024-12-09 15:20:11.325828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.325850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.326025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.326049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.326323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.326346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.326522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.326546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.326786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.326809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.326995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.327239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.327263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.646 [2024-12-09 15:20:11.327480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.646 [2024-12-09 15:20:11.327503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.646 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.327724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.327747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.327965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.327988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 
00:27:09.647 [2024-12-09 15:20:11.328250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.328274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.328522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.328546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.328699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.328722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.328970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.328993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.329158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.329182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.329283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.329305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.329467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.329496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.329711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.329735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.329908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.329932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.330209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.330239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 
00:27:09.647 [2024-12-09 15:20:11.330350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.330372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.330527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.330549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.330717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.330740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.330908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.330931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.331043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.331066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.331251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.331274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.331372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.331393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.331571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.331593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.331676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.331697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.331789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.331811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 
00:27:09.647 [2024-12-09 15:20:11.331988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.332011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.332100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.332121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.332225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.332248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.332352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.332374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.332542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.332566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.332790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.332812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.333032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.333054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.333226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.333250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.333342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.333363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 00:27:09.647 [2024-12-09 15:20:11.333513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.647 [2024-12-09 15:20:11.333536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.647 qpair failed and we were unable to recover it. 
00:27:09.647 [2024-12-09 15:20:11.333773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.647 [2024-12-09 15:20:11.333796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.647 qpair failed and we were unable to recover it.
00:27:09.647 [2024-12-09 15:20:11.333978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.647 [2024-12-09 15:20:11.334001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.647 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.334151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.334174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.334268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.334290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.334380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.334402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.334599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.334622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.334879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.334956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.335166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.335203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.335399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.335433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.335694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.335729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.335837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.335871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.336050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.336083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.336237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.336264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.336374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.336397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.336563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.336586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.336753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.336775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.336996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.337018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.337117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.337140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.337367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.337390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.337629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.337651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.337839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.337862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.338102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.338125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.338279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.338302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.338390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.338412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.338649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.338671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.338841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.338864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.338973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.338995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.339214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.339245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.339399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.339422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.339571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.339594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.339811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.339834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.340072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.340108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.340344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.340408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.340671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.340736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.340939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.340976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.341247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.341285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.341557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.341590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.341864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.341898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.342094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.342127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.342308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.342343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.648 qpair failed and we were unable to recover it.
00:27:09.648 [2024-12-09 15:20:11.342519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.648 [2024-12-09 15:20:11.342551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.342817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.342851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.343091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.343125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.343368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.343404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.343674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.343708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.343942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.343975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.344247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.344291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.344509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.344542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.344753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.344786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:09.649 [2024-12-09 15:20:11.344991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.345026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.345271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.345308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420
00:27:09.649 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.345541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.345566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:09.649 [2024-12-09 15:20:11.345804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.345829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:09.649 [2024-12-09 15:20:11.346053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.346077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.649 [2024-12-09 15:20:11.346323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.346349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.346591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.346615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.346766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.649 [2024-12-09 15:20:11.346789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
00:27:09.649 qpair failed and we were unable to recover it.
00:27:09.649 [2024-12-09 15:20:11.346973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.346996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.347210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.347243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.347427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.347452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.347615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.347638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.347786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.347808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.347923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.347947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.348213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.348246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.348417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.348439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.348696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.348719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.348886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.348909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 
00:27:09.649 [2024-12-09 15:20:11.349068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.349095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.349284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.349308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.349550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.349574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.349743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.349766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.349947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.349973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.350146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.350172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.350413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.350438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.350599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.350622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.350781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.350804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.649 [2024-12-09 15:20:11.350980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.351003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 
00:27:09.649 [2024-12-09 15:20:11.351288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.649 [2024-12-09 15:20:11.351312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.649 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.351486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.351509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.351685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.351708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.351822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.351845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.352034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.352057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.352302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.352325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.352445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.352468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.352574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.352598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.352711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.352737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.352889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.352913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 
00:27:09.650 [2024-12-09 15:20:11.353005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.353026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.353127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.353148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.353279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.353304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.353503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.353528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.353777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.353801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.353920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.353942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.354161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.354184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.354358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.354381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.354550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.354573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.354731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.354755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 
00:27:09.650 [2024-12-09 15:20:11.354898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.354921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.355022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.355050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.355277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.355302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.355412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.355435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.355613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.355636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.355831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.355854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.356007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.356146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.356350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.356480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 
00:27:09.650 [2024-12-09 15:20:11.356673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.356829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.356941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.356962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.357076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.357098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.357199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.357230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.357358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.357382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.357555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.357582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.357759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.357784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.357880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.357903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 00:27:09.650 [2024-12-09 15:20:11.358013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.358036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.650 qpair failed and we were unable to recover it. 
00:27:09.650 [2024-12-09 15:20:11.358285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.650 [2024-12-09 15:20:11.358310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.358484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.358510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.358668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.358691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.358864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.358888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.359125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.359148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.359316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.359340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.359451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.359472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.359623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.359645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.359740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.359762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.359937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.359960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 
00:27:09.651 [2024-12-09 15:20:11.360149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.360173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.360310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.360333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.360530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.360553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.360724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.360747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.360914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.360937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.361106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.361129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.361300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.361324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.361428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.361450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.361546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.361567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.361680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.361703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 
00:27:09.651 [2024-12-09 15:20:11.361874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.361897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.361985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.362007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.362207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.362245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.362465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.362488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.362753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.362777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.362930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.362952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.363052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.363076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.363251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.363275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.363359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.363381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.363558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.363581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 
00:27:09.651 [2024-12-09 15:20:11.363705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.363728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.363899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.363922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.364115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.364249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.364452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.364578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.364773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.364885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.364995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.365018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.651 qpair failed and we were unable to recover it. 00:27:09.651 [2024-12-09 15:20:11.365180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.651 [2024-12-09 15:20:11.365203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 
00:27:09.652 [2024-12-09 15:20:11.365332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.365355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.365466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.365489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.365601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.365625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.365809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.365833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.365943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.365967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.366084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.366107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.366191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.366215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.366397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.366421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.366575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.366598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.366714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.366740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 
00:27:09.652 [2024-12-09 15:20:11.366887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.366910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.367971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.367994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.368098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.368121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.368245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.368269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 
00:27:09.652 [2024-12-09 15:20:11.368376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.368398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.368493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.368516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.368699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.368722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.368929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.368952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 
00:27:09.652 [2024-12-09 15:20:11.369826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.369951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.369974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.370137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.370160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.370266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.370290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.652 qpair failed and we were unable to recover it. 00:27:09.652 [2024-12-09 15:20:11.370378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.652 [2024-12-09 15:20:11.370400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.370541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.370565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.370669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.370696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.370854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.370877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.370983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 
00:27:09.653 [2024-12-09 15:20:11.371209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.371931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.371953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.372052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.372244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.372432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 
00:27:09.653 [2024-12-09 15:20:11.372538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.372648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.372834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.372962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.372986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.373138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.373162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.373250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.373274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.373366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.373388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.373610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.373634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.373734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.373757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.373910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.373933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 
00:27:09.653 [2024-12-09 15:20:11.374084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.374207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.374357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.374468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.374576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.374755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.374936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.374960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.375054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.375176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.375382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 
00:27:09.653 [2024-12-09 15:20:11.375489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.375681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.375803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.375941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.653 [2024-12-09 15:20:11.375965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.653 qpair failed and we were unable to recover it. 00:27:09.653 [2024-12-09 15:20:11.376080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.376103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.376205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.376238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.376343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.376366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.376534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.376557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.376660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.376683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.376779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.376801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 
00:27:09.654 [2024-12-09 15:20:11.376977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.377885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.377909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.378006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.378192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 
00:27:09.654 [2024-12-09 15:20:11.378407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.378541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.378669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.378788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.378905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.378929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.379016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.379146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.379279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.379407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.379593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 
00:27:09.654 [2024-12-09 15:20:11.379699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.379877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.654 [2024-12-09 15:20:11.380047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.654 [2024-12-09 15:20:11.380071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.654 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.380295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.380320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.380496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.380534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.380700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.380723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.380810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.380833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.381001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.381024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.381132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.381155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 00:27:09.917 [2024-12-09 15:20:11.381259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.917 [2024-12-09 15:20:11.381283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.917 qpair failed and we were unable to recover it. 
00:27:09.918 [2024-12-09 15:20:11.381381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.381404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.381489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.381512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.381618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.381641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.381794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.381817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.381912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.381935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 
00:27:09.918 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.918 [2024-12-09 15:20:11.382571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.382895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.382998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.383188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.918 [2024-12-09 15:20:11.383305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.383425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.383530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 qpair failed and we were unable to recover it. 
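The nvmf/common.sh xtrace line interleaved above installs a cleanup trap so that diagnostics and target teardown still run if the test is interrupted. A generic sketch of that shell pattern, with a placeholder body standing in for the real 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' handler:

#!/usr/bin/env bash
# Sketch of the trap pattern set up by nvmf/common.sh; the cleanup body below
# is a placeholder, not the real process_shm/nvmftestfini implementation.
cleanup() {
    echo "cleanup: dump diagnostics, then tear down the NVMe-oF target" >&2
}
trap cleanup SIGINT SIGTERM EXIT
echo "test body runs here; cleanup fires on Ctrl-C, SIGTERM, or normal exit"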
00:27:09.918 [2024-12-09 15:20:11.383648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.383771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.383877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.383902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.383986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 
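The host/target_disconnect.sh@19 xtrace lines above create the backing device for the test: a 64 MB malloc bdev with 512-byte blocks named Malloc0, issued through the framework's rpc_cmd wrapper. Outside the test framework the same RPC can be sent with SPDK's scripts/rpc.py; the socket path below is the usual default and is an assumption here:

# Manual equivalent of the rpc_cmd call shown in the xtrace above (socket path
# assumed to be the SPDK default):
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0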
00:27:09.918 [2024-12-09 15:20:11.384800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.384908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.384930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.918 qpair failed and we were unable to recover it. 00:27:09.918 [2024-12-09 15:20:11.385820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.918 [2024-12-09 15:20:11.385843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.385995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.386018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 
00:27:09.919 [2024-12-09 15:20:11.386171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.386194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.386315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.386361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.386479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.386512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.386627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.386659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.386833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.386867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.387059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.387256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.387501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.387626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.387753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 
00:27:09.919 [2024-12-09 15:20:11.387868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.387976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.387999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.388102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.388125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.388214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.388247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.388425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.388448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.388604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.388628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.388783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.388806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.388958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.388981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.389085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.389264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 
00:27:09.919 [2024-12-09 15:20:11.389393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.389526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.389652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.389763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.389894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.389917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 
00:27:09.919 [2024-12-09 15:20:11.390623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.390855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.390877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.391053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.391077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.391183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.391206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.391325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.391347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.391432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.919 [2024-12-09 15:20:11.391455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.919 qpair failed and we were unable to recover it. 00:27:09.919 [2024-12-09 15:20:11.391545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.391568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.391651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.391675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.391779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.391802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 
00:27:09.920 [2024-12-09 15:20:11.391907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.391931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.392086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.392109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.392272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.392296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.392454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.392477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.392581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.392604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.392704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.392726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.392952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.392978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.393139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.393162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.393310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.393335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.393525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.393548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 
00:27:09.920 [2024-12-09 15:20:11.393680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.393705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.393796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.393820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.393901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.393924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.394846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.394869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 
00:27:09.920 [2024-12-09 15:20:11.394978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.395973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.395996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.396083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.396106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.396191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.396214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 
00:27:09.920 [2024-12-09 15:20:11.396329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.396353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.396439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.396463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.396564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.396589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.396750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.920 [2024-12-09 15:20:11.396773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.920 qpair failed and we were unable to recover it. 00:27:09.920 [2024-12-09 15:20:11.396858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.396882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.396970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.396994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.397154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.397269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.397374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.397545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 
00:27:09.921 [2024-12-09 15:20:11.397665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.397777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.397888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.397912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.398063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.398086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.398187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.398211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.398306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.398330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.398578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.398601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.398773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.398796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.398908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.398931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 
00:27:09.921 [2024-12-09 15:20:11.399140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.399962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.399984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.400074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.400270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.400402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 
00:27:09.921 [2024-12-09 15:20:11.400583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.400699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.400826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.400954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.400977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.401087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.401113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.401199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.401229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.401389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.401413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.401508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.401531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.401626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.401648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 00:27:09.921 [2024-12-09 15:20:11.401826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.921 [2024-12-09 15:20:11.401849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.921 qpair failed and we were unable to recover it. 
00:27:09.921-00:27:09.924 [repetitive failure records condensed] From 15:20:11.401 through 15:20:11.419 every connection attempt fails with the same pair of errors, logged back to back:
    posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
    nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420
    qpair failed and we were unable to recover it.
Some attempts in this window report tqpair=0x7f9288000b90 instead of 0x1f85500; the address (10.0.0.2), port (4420) and errno (111) are identical in every record. The individual repetitions are omitted here; the same failures continue below while the target is being configured.
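errno 111 on Linux is ECONNREFUSED: the host side keeps dialing 10.0.0.2:4420 before the target's TCP listener exists (the '*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***' notice only appears further down in this log). When reproducing this by hand it can help to wait for the listener instead of retrying blindly. The snippet below is an illustrative sketch only, not part of the test scripts; the port number comes from this log, while the use of ss and the timeout value are assumptions.

# Poll until something is listening on TCP port 4420 (the NVMe/TCP port used
# throughout this log), then proceed; give up after roughly 10 seconds.
for _ in $(seq 1 100); do
    if ss -ltn 2>/dev/null | grep -q ':4420 '; then
        echo "NVMe/TCP listener is up"
        exit 0
    fi
    sleep 0.1   # a connect() issued now would still fail with errno 111 (ECONNREFUSED)
done
echo "timed out waiting for a listener on port 4420" >&2
exit 1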
00:27:09.924 Malloc0 00:27:09.924 [2024-12-09 15:20:11.419739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.419762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.420097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.420120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.420285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.420309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.924 [2024-12-09 15:20:11.420476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.420500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.420654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.420676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:09.924 [2024-12-09 15:20:11.420911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.420934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.421099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.421123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.924 [2024-12-09 15:20:11.421282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.421314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.421413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.421435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 
00:27:09.924 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.924 [2024-12-09 15:20:11.421593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.421616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.421782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.421805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.421949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.421972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.422135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.422157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.422246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.422268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.422498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.422520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.422690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.422712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.924 [2024-12-09 15:20:11.422938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.924 [2024-12-09 15:20:11.422961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.924 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.423205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.423236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.423471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.423493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 
00:27:09.925 [repeated failure records omitted: connect() failed, errno = 111 / sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.]
00:27:09.925 [2024-12-09 15:20:11.425455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.425477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.425634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.425657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.425947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.425970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.426133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.426156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.426374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.426397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.426584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.426607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.426907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.426929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.427173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.427197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.427278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.925 [2024-12-09 15:20:11.427426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.427450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.427564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.427587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 
00:27:09.925 [further repetitions of the same errno 111 connect() failure for tqpair=0x1f85500 at 10.0.0.2:4420 omitted]
00:27:09.925 [2024-12-09 15:20:11.429911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.429934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.430040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.430063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.430269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.430337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9290000b90 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 A controller has encountered a failure and is being reset. 00:27:09.925 [2024-12-09 15:20:11.430649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.430722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9284000b90 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.925 [2024-12-09 15:20:11.431052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.925 [2024-12-09 15:20:11.431107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9288000b90 with addr=10.0.0.2, port=4420 00:27:09.925 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.431353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.431378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.431574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.431598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.431763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.431785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.431902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.431924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.432174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.432197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 
00:27:09.926 [2024-12-09 15:20:11.432294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.432315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.432485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.432508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.432613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.432635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.926 [2024-12-09 15:20:11.432743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.432765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.432919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.432942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.433043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.433066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.926 [2024-12-09 15:20:11.433161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.433182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.433337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.433360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 
00:27:09.926 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.926 [2024-12-09 15:20:11.433578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.433601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.433686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.433707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.926 [2024-12-09 15:20:11.433863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.433887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.434047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.434070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.434239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.434262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.434463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.434486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.434639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.434663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.434815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.434838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.434983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.435006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 
00:27:09.926 [further repetitions of the same errno 111 connect() failure for tqpair=0x1f85500 at 10.0.0.2:4420 omitted]
00:27:09.926 [2024-12-09 15:20:11.436955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.436978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.437175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.437198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f85500 with addr=10.0.0.2, port=4420 00:27:09.926 qpair failed and we were unable to recover it. 00:27:09.926 [2024-12-09 15:20:11.437410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.926 [2024-12-09 15:20:11.437489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f93460 with addr=10.0.0.2, port=4420 00:27:09.926 [2024-12-09 15:20:11.437521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93460 is same with the state(6) to be set 00:27:09.926 [2024-12-09 15:20:11.437554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93460 (9): Bad file descriptor 00:27:09.926 [2024-12-09 15:20:11.437588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:09.926 [2024-12-09 15:20:11.437610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:09.926 [2024-12-09 15:20:11.437639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:09.926 Unable to reset the controller. 
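The reconnect loop above finally gives up ('Unable to reset the controller.') while nothing is listening on 10.0.0.2:4420 yet; the listening notice only appears a few records below. When debugging a stall like this interactively, the target's JSON-RPC socket can be queried to see which subsystems and listeners actually exist. The check below is a sketch, not something the test itself runs; the in-tree scripts/rpc.py path and the default RPC socket location are assumptions.

# Dump every subsystem the running SPDK target exposes, including namespaces
# and listen addresses, so a missing listener on nqn.2016-06.io.spdk:cnode1
# is immediately visible.
sudo ./scripts/rpc.py nvmf_get_subsystems
# Independently confirm whether any socket is bound to the NVMe/TCP port.
sudo ss -ltnp | grep 4420 || echo "no listener on port 4420 yet"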
00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.927 [2024-12-09 15:20:11.452304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.927 15:20:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1587322 00:27:10.858 Controller properly reset. 00:27:16.114 Initializing NVMe Controllers 00:27:16.114 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:16.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:16.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:16.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:16.114 Initialization complete. Launching workers. 
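Taken together, the rpc_cmd calls traced in this section bring the target up in the order transport -> subsystem -> namespace -> listeners, after which the stalled host reconnects ('Controller properly reset.') and the workers start. The equivalent manual sequence against a running nvmf_tgt looks roughly like the sketch below; the scripts/rpc.py path, the sudo invocations and the bdev_malloc_create step that backs Malloc0 are not shown in this log and are assumptions.

RPC=./scripts/rpc.py                                   # assumed in-tree helper path
sudo $RPC bdev_malloc_create -b Malloc0 64 512         # backing bdev; size and block size are placeholders
sudo $RPC nvmf_create_transport -t tcp -o
sudo $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
sudo $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
sudo $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sudo $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420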
00:27:16.114 Starting thread on core 1 00:27:16.114 Starting thread on core 2 00:27:16.114 Starting thread on core 3 00:27:16.114 Starting thread on core 0 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:16.114 00:27:16.114 real 0m10.704s 00:27:16.114 user 0m34.254s 00:27:16.114 sys 0m6.447s 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.114 ************************************ 00:27:16.114 END TEST nvmf_target_disconnect_tc2 00:27:16.114 ************************************ 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.114 rmmod nvme_tcp 00:27:16.114 rmmod nvme_fabrics 00:27:16.114 rmmod nvme_keyring 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1588003 ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1588003 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1588003 ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1588003 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1588003 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1588003' 00:27:16.114 killing process with pid 1588003 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1588003 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1588003 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.114 15:20:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.020 15:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:18.020 00:27:18.020 real 0m19.455s 00:27:18.020 user 1m1.461s 00:27:18.020 sys 0m11.574s 00:27:18.020 15:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.020 15:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:18.020 ************************************ 00:27:18.020 END TEST nvmf_target_disconnect 00:27:18.020 ************************************ 00:27:18.278 15:20:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:18.278 00:27:18.278 real 5m50.176s 00:27:18.278 user 10m41.817s 00:27:18.278 sys 2m0.108s 00:27:18.278 15:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.278 15:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.278 ************************************ 00:27:18.278 END TEST nvmf_host 00:27:18.278 ************************************ 00:27:18.278 15:20:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:18.278 15:20:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:18.278 15:20:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:18.278 15:20:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:18.278 15:20:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.278 15:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.278 ************************************ 00:27:18.278 START TEST nvmf_target_core_interrupt_mode 00:27:18.278 ************************************ 00:27:18.279 15:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:18.279 * Looking for test storage... 00:27:18.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:18.279 15:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.279 15:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.279 15:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.279 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.538 --rc genhtml_branch_coverage=1 00:27:18.538 --rc genhtml_function_coverage=1 00:27:18.538 --rc genhtml_legend=1 00:27:18.538 --rc geninfo_all_blocks=1 00:27:18.538 --rc geninfo_unexecuted_blocks=1 00:27:18.538 00:27:18.538 ' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.538 --rc genhtml_branch_coverage=1 00:27:18.538 --rc genhtml_function_coverage=1 00:27:18.538 --rc genhtml_legend=1 00:27:18.538 --rc geninfo_all_blocks=1 00:27:18.538 --rc geninfo_unexecuted_blocks=1 00:27:18.538 00:27:18.538 ' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.538 --rc genhtml_branch_coverage=1 00:27:18.538 --rc genhtml_function_coverage=1 00:27:18.538 --rc genhtml_legend=1 00:27:18.538 --rc geninfo_all_blocks=1 00:27:18.538 --rc geninfo_unexecuted_blocks=1 00:27:18.538 00:27:18.538 ' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.538 --rc genhtml_branch_coverage=1 00:27:18.538 --rc genhtml_function_coverage=1 00:27:18.538 --rc genhtml_legend=1 00:27:18.538 --rc geninfo_all_blocks=1 00:27:18.538 --rc geninfo_unexecuted_blocks=1 00:27:18.538 00:27:18.538 ' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:18.538 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:18.539 ************************************ 00:27:18.539 START TEST nvmf_abort 00:27:18.539 ************************************ 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:18.539 * Looking for test storage... 00:27:18.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.539 --rc genhtml_branch_coverage=1 00:27:18.539 --rc genhtml_function_coverage=1 00:27:18.539 --rc genhtml_legend=1 00:27:18.539 --rc geninfo_all_blocks=1 00:27:18.539 --rc geninfo_unexecuted_blocks=1 00:27:18.539 00:27:18.539 ' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.539 --rc genhtml_branch_coverage=1 00:27:18.539 --rc genhtml_function_coverage=1 00:27:18.539 --rc genhtml_legend=1 00:27:18.539 --rc geninfo_all_blocks=1 00:27:18.539 --rc geninfo_unexecuted_blocks=1 00:27:18.539 00:27:18.539 ' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.539 --rc genhtml_branch_coverage=1 00:27:18.539 --rc genhtml_function_coverage=1 00:27:18.539 --rc genhtml_legend=1 00:27:18.539 --rc geninfo_all_blocks=1 00:27:18.539 --rc geninfo_unexecuted_blocks=1 00:27:18.539 00:27:18.539 ' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.539 --rc genhtml_branch_coverage=1 00:27:18.539 --rc genhtml_function_coverage=1 00:27:18.539 --rc genhtml_legend=1 00:27:18.539 --rc geninfo_all_blocks=1 00:27:18.539 --rc geninfo_unexecuted_blocks=1 00:27:18.539 00:27:18.539 ' 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.539 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.798 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.799 15:20:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:18.799 15:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.369 15:20:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:25.369 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:25.369 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:25.369 Found net devices under 0000:af:00.0: cvl_0_0 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:25.369 Found net devices under 0000:af:00.1: cvl_0_1 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.369 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.370 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.370 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.370 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.370 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.370 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.370 15:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:27:25.370 00:27:25.370 --- 10.0.0.2 ping statistics --- 00:27:25.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.370 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:27:25.370 00:27:25.370 --- 10.0.0.1 ping statistics --- 00:27:25.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.370 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1592505 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1592505 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1592505 ']' 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 [2024-12-09 15:20:26.353875] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:25.370 [2024-12-09 15:20:26.354774] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:27:25.370 [2024-12-09 15:20:26.354807] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.370 [2024-12-09 15:20:26.432950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.370 [2024-12-09 15:20:26.472589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.370 [2024-12-09 15:20:26.472630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.370 [2024-12-09 15:20:26.472640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.370 [2024-12-09 15:20:26.472649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.370 [2024-12-09 15:20:26.472657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.370 [2024-12-09 15:20:26.474132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.370 [2024-12-09 15:20:26.474257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.370 [2024-12-09 15:20:26.474257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.370 [2024-12-09 15:20:26.540836] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:25.370 [2024-12-09 15:20:26.541523] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:25.370 [2024-12-09 15:20:26.541652] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
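For orientation: the nvmf_abort run starts its target inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0xE (cores 1, 2 and 3), which is why exactly three reactors come up and the nvmf_tgt poll-group threads are switched to interrupt mode instead of busy-polling. A condensed sketch of that launch, with the long workspace path shortened and a simple RPC-socket wait standing in for the framework's waitforlisten helper (an assumption, not the literal test code):

  # 0xE selects cores 1-3; --interrupt-mode lets idle reactors sleep on events
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # wait until the target answers RPCs before configuring it
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done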
00:27:25.370 [2024-12-09 15:20:26.541808] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 [2024-12-09 15:20:26.611052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 Malloc0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 Delay0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 [2024-12-09 15:20:26.698983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.370 15:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:25.371 [2024-12-09 15:20:26.867367] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:27.274 Initializing NVMe Controllers 00:27:27.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:27.274 controller IO queue size 128 less than required 00:27:27.274 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:27.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:27.274 Initialization complete. Launching workers. 
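Note for readers reconstructing the flow: everything abort.sh did above is plain JSON-RPC against the already-running target, followed by one invocation of the abort example. Condensed into equivalent rpc.py calls (rpc.py standing in for the script's rpc_cmd wrapper; every argument is taken from this run's trace, comments are descriptive only), the setup is roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with the options used in this run
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
# 64 MB malloc bdev (4096-byte blocks) wrapped in a delay bdev, so the abort
# example has long-lived I/O to abort (delay latency arguments in microseconds)
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# subsystem cnode0 exposes Delay0 and listens on the in-namespace target address
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# then drive it: one core (-c 0x1), one second (-t 1), queue depth 128
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128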
00:27:27.274 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37934 00:27:27.274 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37991, failed to submit 66 00:27:27.274 success 37934, unsuccessful 57, failed 0 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:27.274 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:27.275 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:27.275 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:27.275 rmmod nvme_tcp 00:27:27.275 rmmod nvme_fabrics 00:27:27.275 rmmod nvme_keyring 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1592505 ']' 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1592505 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1592505 ']' 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1592505 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592505 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592505' 00:27:27.534 killing process with pid 1592505 
00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1592505 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1592505 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:27.534 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:27.793 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:27.793 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:27.793 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.793 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.793 15:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:29.700 00:27:29.700 real 0m11.249s 00:27:29.700 user 0m10.811s 00:27:29.700 sys 0m5.661s 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.700 ************************************ 00:27:29.700 END TEST nvmf_abort 00:27:29.700 ************************************ 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:29.700 ************************************ 00:27:29.700 START TEST nvmf_ns_hotplug_stress 00:27:29.700 ************************************ 00:27:29.700 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:29.961 * Looking for test storage... 
00:27:29.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.961 --rc genhtml_branch_coverage=1 00:27:29.961 --rc genhtml_function_coverage=1 00:27:29.961 --rc genhtml_legend=1 00:27:29.961 --rc geninfo_all_blocks=1 00:27:29.961 --rc geninfo_unexecuted_blocks=1 00:27:29.961 00:27:29.961 ' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.961 --rc genhtml_branch_coverage=1 00:27:29.961 --rc genhtml_function_coverage=1 00:27:29.961 --rc genhtml_legend=1 00:27:29.961 --rc geninfo_all_blocks=1 00:27:29.961 --rc geninfo_unexecuted_blocks=1 00:27:29.961 00:27:29.961 ' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.961 --rc genhtml_branch_coverage=1 00:27:29.961 --rc genhtml_function_coverage=1 00:27:29.961 --rc genhtml_legend=1 00:27:29.961 --rc geninfo_all_blocks=1 00:27:29.961 --rc geninfo_unexecuted_blocks=1 00:27:29.961 00:27:29.961 ' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:29.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.961 --rc genhtml_branch_coverage=1 00:27:29.961 --rc genhtml_function_coverage=1 
00:27:29.961 --rc genhtml_legend=1 00:27:29.961 --rc geninfo_all_blocks=1 00:27:29.961 --rc geninfo_unexecuted_blocks=1 00:27:29.961 00:27:29.961 ' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.961 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.962 15:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:36.547 15:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:36.547 15:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:36.547 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:36.547 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.547 
15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:36.547 Found net devices under 0000:af:00.0: cvl_0_0 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.547 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:36.548 Found net devices under 0000:af:00.1: cvl_0_1 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.548 15:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:36.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:27:36.548 00:27:36.548 --- 10.0.0.2 ping statistics --- 00:27:36.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.548 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:27:36.548 00:27:36.548 --- 10.0.0.1 ping statistics --- 00:27:36.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.548 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1596451 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1596451 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1596451 ']' 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
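The nvmf_tcp_init sequence traced just above is what turns the two e810 ports into a self-contained initiator/target pair: cvl_0_0 is moved into its own network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening port 4420. Stripped of the shell tracing, the commands as run here were approximately:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1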
00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.548 [2024-12-09 15:20:37.598598] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:36.548 [2024-12-09 15:20:37.599489] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:27:36.548 [2024-12-09 15:20:37.599522] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.548 [2024-12-09 15:20:37.674213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:36.548 [2024-12-09 15:20:37.714060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.548 [2024-12-09 15:20:37.714095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.548 [2024-12-09 15:20:37.714101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.548 [2024-12-09 15:20:37.714107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.548 [2024-12-09 15:20:37.714112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.548 [2024-12-09 15:20:37.715409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.548 [2024-12-09 15:20:37.715513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.548 [2024-12-09 15:20:37.715514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.548 [2024-12-09 15:20:37.781982] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:36.548 [2024-12-09 15:20:37.782669] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:36.548 [2024-12-09 15:20:37.782874] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:36.548 [2024-12-09 15:20:37.782997] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
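The block above is the interrupt-mode variant of the target start: nvmf_tgt runs inside the target namespace with --interrupt-mode and core mask 0xE (the three reactors on cores 1-3 seen above), and the app_thread and poll-group threads are put into interrupt mode (the spdk_thread_set_interrupt_mode notices). As traced in this run, the launch amounts to:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!                 # 1596451 in this run
waitforlisten "$nvmfpid"   # autotest_common.sh helper; waits for the app to listen on /var/tmp/spdk.sock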
00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:36.548 15:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:36.548 [2024-12-09 15:20:38.016280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.548 15:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:36.549 15:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.817 [2024-12-09 15:20:38.404590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.817 15:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:37.120 15:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:37.120 Malloc0 00:27:37.120 15:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:37.378 Delay0 00:27:37.378 15:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.648 15:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:37.648 NULL1 00:27:37.648 15:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
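What follows is the stress loop itself: ns_hotplug_stress.sh starts spdk_nvme_perf in the background (30 seconds of 512-byte random reads at queue depth 128 against cnode1) and, for as long as that process is alive, keeps hot-removing and re-adding the Delay0 namespace while growing the NULL1 null bdev one unit per pass (null_size 1000, 1001, ...). The repeated @44-@50 lines below are iterations of that loop; reconstructed from the trace (the while-loop framing is inferred, not quoted), it is roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!                                          # 1596897 in this run
null_size=1000
while kill -0 "$PERF_PID"; do                        # loop until perf exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done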
00:27:37.906 15:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1596897 00:27:37.906 15:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:37.906 15:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:37.906 15:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.279 Read completed with error (sct=0, sc=11) 00:27:39.279 15:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:39.279 15:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:39.279 15:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:39.537 true 00:27:39.537 15:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:39.537 15:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.469 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.469 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:40.469 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:40.727 true 00:27:40.727 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:40.727 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.985 15:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.243 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:41.243 15:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:41.243 true 00:27:41.243 15:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:41.243 15:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.616 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.616 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:42.616 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:42.874 true 00:27:42.874 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:42.874 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.874 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.131 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:43.131 15:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:43.389 true 00:27:43.389 15:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:43.389 15:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.766 15:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.766 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:27:44.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.766 15:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:44.766 15:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:45.025 true 00:27:45.025 15:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:45.025 15:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.956 15:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.957 15:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:45.957 15:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:46.214 true 00:27:46.214 15:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:46.214 15:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.479 15:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.479 15:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:46.479 15:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:46.739 true 00:27:46.739 15:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:46.739 15:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.672 15:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.930 15:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:47.930 15:20:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:48.188 true 00:27:48.188 15:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:48.188 15:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.446 15:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.704 15:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:48.704 15:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:48.704 true 00:27:48.704 15:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:48.704 15:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.076 15:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.076 15:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:50.076 15:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:50.076 true 00:27:50.076 15:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:50.076 15:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.334 15:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.592 15:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:50.592 15:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:50.849 true 00:27:50.849 15:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:50.849 15:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:51.782 15:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.039 15:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:52.039 15:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:52.297 true 00:27:52.297 15:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:52.297 15:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.297 15:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.555 15:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:52.555 15:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:52.813 true 00:27:52.813 15:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:52.813 15:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.744 15:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.001 15:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:54.001 15:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:54.258 true 00:27:54.258 15:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:54.258 15:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.516 15:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.516 15:20:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:54.516 15:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:54.774 true 00:27:54.774 15:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:54.774 15:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 15:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.965 15:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:55.965 15:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:56.222 true 00:27:56.222 15:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:56.222 15:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.274 15:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.274 15:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:57.274 15:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:57.532 true 00:27:57.532 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:57.532 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.791 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.791 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:57.791 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:58.048 true 00:27:58.048 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:58.048 15:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:58.981 15:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.239 15:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:59.239 15:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:59.497 true 00:27:59.497 15:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:27:59.497 15:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.430 15:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.430 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:00.430 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:00.687 true 00:28:00.687 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:00.687 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.944 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:28:01.202 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:01.202 15:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:01.202 true 00:28:01.460 15:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:01.460 15:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:02.392 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.650 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:02.650 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:02.650 true 00:28:02.650 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:02.650 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.907 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.165 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:03.165 15:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:03.427 true 00:28:03.427 15:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:03.427 15:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.360 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.617 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:04.617 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:04.617 true 00:28:04.617 15:21:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:04.617 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.874 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.131 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:05.131 15:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:05.389 true 00:28:05.389 15:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:05.389 15:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:06.322 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.580 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:06.580 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:06.837 true 00:28:06.837 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:06.837 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.095 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.095 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:07.095 15:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:07.353 true 00:28:07.353 15:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897 00:28:07.353 15:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.725 Initializing NVMe Controllers 00:28:08.725 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:08.725 Controller IO queue size 128, less than required.
00:28:08.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:08.725 Controller IO queue size 128, less than required.
00:28:08.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:08.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:08.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:08.725 Initialization complete. Launching workers.
00:28:08.725 ========================================================
00:28:08.725                                                                                                      Latency(us)
00:28:08.725 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:28:08.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1124.63       0.55   70986.79    2828.18 1039453.13
00:28:08.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16556.97       8.08    7730.58    1824.34  368393.03
00:28:08.725 ========================================================
00:28:08.725 Total                                                                  :   17681.60       8.63   11753.97    1824.34 1039453.13
00:28:08.725
00:28:08.725 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:08.725 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:28:08.725 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:28:08.982 true
00:28:08.982 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1596897
00:28:08.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1596897) - No such process
00:28:08.982 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1596897
00:28:08.982 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:09.240 15:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:09.497
null0 00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.497 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:09.755 null1 00:28:09.755 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:09.755 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:09.755 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:10.012 null2 00:28:10.012 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.012 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.012 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:10.012 null3 00:28:10.012 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.012 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.012 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:10.269 null4 00:28:10.269 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.269 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.269 15:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:10.527 null5 00:28:10.527 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.527 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.527 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:10.527 null6 00:28:10.527 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.527 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.527 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:10.784 null7 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1602720 1602722 1602723 1602725 1602727 1602729 1602731 1602733 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:10.785 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.043 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.302 15:21:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.302 15:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.302 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.560 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:11.561 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:11.818 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:11.818 15:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:11.818 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.818 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:11.819 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:11.819 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:11.819 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.819 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.076 15:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:12.076 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.077 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.335 
15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:12.335 15:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:12.335 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.335 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.335 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.335 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.335 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:12.335 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.593 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:12.594 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:12.594 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:12.594 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:12.594 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:12.594 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:12.851 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.851 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:28:12.851 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:12.851 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.851 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:12.852 
15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:12.852 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.110 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.369 15:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:13.369 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:13.369 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.369 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:13.627 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:13.885 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:13.885 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:13.885 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:13.885 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:13.886 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:13.886 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:13.886 15:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:13.886 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.144 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.403 15:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.403 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.661 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 
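The interleaved rpc.py calls above all come from three traced lines of target/ns_hotplug_stress.sh: the @16 counter check, the @17 nvmf_subsystem_add_ns call, and the @18 nvmf_subsystem_remove_ns call. Below is a minimal sketch of one such add/remove cycle for a single namespace, reconstructed only from those traced commands; the add_remove helper name, the argument plumbing, and the way the eight namespaces are driven concurrently are assumptions (the per-call ++i / i<10 pattern and the varying ordering in the trace merely suggest several of these loops running in parallel).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as traced
  subsys=nqn.2016-06.io.spdk:cnode1

  add_remove() {                                   # hypothetical helper name
      local nsid=$1 bdev=$2 i
      for (( i = 0; i < 10; ++i )); do             # @16: ten rounds per namespace
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # @17
          "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # @18
      done
  }

  # in the trace, namespace IDs 1..8 are backed by bdevs null0..null7
  for n in {1..8}; do
      add_remove "$n" "null$(( n - 1 ))" &
  done
  wait

The point of the stress is exactly the churn visible above: namespaces appear and disappear on the same subsystem while the connected host keeps issuing I/O, exercising the hotplug paths in interrupt mode.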
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.920 rmmod nvme_tcp 00:28:14.920 rmmod nvme_fabrics 00:28:14.920 rmmod nvme_keyring 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1596451 ']' 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1596451 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1596451 ']' 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1596451 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:14.920 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596451 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
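At this point the trace switches to the shared teardown: the EXIT trap is cleared (@68) and nvmftestfini (@70) unloads the nvme-tcp modules, kills the target process and restores the host network state. A rough condensation of that sequence, assembled only from the nvmf/common.sh and autotest_common.sh lines traced around here; the wrapper function name, the pid argument, and the && break on a successful unload are assumptions.

  nvmftestfini_sketch() {                        # hypothetical condensation of the traced teardown
      local tgt_pid=$1 i                         # the nvmf target app, pid 1596451 in this run
      sync                                       # nvmf/common.sh@121
      for i in {1..20}; do                       # @125: retry the module unload
          modprobe -v -r nvme-tcp && break       # @126: produces the rmmod nvme_tcp/... lines above
      done
      modprobe -v -r nvme-fabrics                # @127
      kill "$tgt_pid" && wait "$tgt_pid"         # killprocess: kill then wait, as traced at @973/@978
      iptables-save | grep -v SPDK_NVMF | iptables-restore   # @791: drop SPDK-added firewall rules
      _remove_spdk_ns                            # @302: tear down the cvl_0_0_ns_spdk network namespace
      ip -4 addr flush cvl_0_1                   # @303: clear the address on the test interface
  }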
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596451' 00:28:15.179 killing process with pid 1596451 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1596451 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1596451 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.179 15:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.713 15:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.713 00:28:17.713 real 0m47.535s 00:28:17.713 user 2m57.906s 00:28:17.713 sys 0m19.029s 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:17.713 ************************************ 00:28:17.713 END TEST nvmf_ns_hotplug_stress 00:28:17.713 ************************************ 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:17.713 ************************************ 00:28:17.713 START TEST nvmf_delete_subsystem 00:28:17.713 
************************************ 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:17.713 * Looking for test storage... 00:28:17.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.713 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.714 --rc genhtml_branch_coverage=1 00:28:17.714 --rc genhtml_function_coverage=1 00:28:17.714 --rc genhtml_legend=1 00:28:17.714 --rc geninfo_all_blocks=1 00:28:17.714 --rc geninfo_unexecuted_blocks=1 00:28:17.714 00:28:17.714 ' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.714 --rc genhtml_branch_coverage=1 00:28:17.714 --rc genhtml_function_coverage=1 00:28:17.714 --rc genhtml_legend=1 00:28:17.714 --rc geninfo_all_blocks=1 00:28:17.714 --rc geninfo_unexecuted_blocks=1 00:28:17.714 00:28:17.714 ' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.714 --rc genhtml_branch_coverage=1 00:28:17.714 --rc genhtml_function_coverage=1 00:28:17.714 --rc genhtml_legend=1 00:28:17.714 --rc geninfo_all_blocks=1 00:28:17.714 --rc geninfo_unexecuted_blocks=1 00:28:17.714 00:28:17.714 ' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.714 --rc genhtml_branch_coverage=1 00:28:17.714 --rc genhtml_function_coverage=1 00:28:17.714 --rc 
genhtml_legend=1 00:28:17.714 --rc geninfo_all_blocks=1 00:28:17.714 --rc geninfo_unexecuted_blocks=1 00:28:17.714 00:28:17.714 ' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.714 15:21:19 
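The scripts/common.sh trace above (lt 1.15 2 going through cmp_versions) is a plain component-wise version compare: both version strings are split on '.', '-' and ':', each field is compared numerically, and the first difference decides the result. A stripped-down sketch of that logic, reconstructed from the traced steps only; the handling of missing or non-numeric components in the real helper is assumed to default to 0 here, and the operator coverage is reduced to what this run needs.

  cmp_versions_sketch() {                      # usage: cmp_versions_sketch 1.15 '<' 2
      local IFS=.-: op=$2
      local -a ver1 ver2
      read -ra ver1 <<< "$1"                   # @336: split "1.15" into (1 15)
      read -ra ver2 <<< "$3"                   # @337
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do        # @364: walk the components
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi   # @367
          if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi   # @368
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }

In this run the check reduces to "is lcov 1.15 older than 2", which is true, so the pre-2.0 option spelling is exported, which is exactly what the LCOV_OPTS / LCOV values with --rc lcov_branch_coverage=1 in the trace above show.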
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.714 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.715 15:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.280 15:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.280 15:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:24.280 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:24.280 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.280 15:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:24.280 Found net devices under 0000:af:00.0: cvl_0_0 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.280 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:24.281 Found net devices under 0000:af:00.1: cvl_0_1 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.281 15:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:28:24.281 00:28:24.281 --- 10.0.0.2 ping statistics --- 00:28:24.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.281 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:28:24.281 00:28:24.281 --- 10.0.0.1 ping statistics --- 00:28:24.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.281 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1607046 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1607046 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1607046 ']' 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
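The namespace bring-up traced above (nvmf_tcp_init from nvmf/common.sh) condenses to roughly the following sketch. The interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the port-4420 firewall rule are taken from this log; TGT_NS is only illustrative shorthand, and the real helper additionally flushes old addresses and tags the iptables rule with an SPDK_NVMF comment.
TGT_NS=cvl_0_0_ns_spdk                                         # namespace name used in this run
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic from the target side
ping -c 1 10.0.0.2                                             # initiator -> target reachability check
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1                     # target -> initiator reachability check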
00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 [2024-12-09 15:21:25.335198] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:24.281 [2024-12-09 15:21:25.336096] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:28:24.281 [2024-12-09 15:21:25.336128] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.281 [2024-12-09 15:21:25.413701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.281 [2024-12-09 15:21:25.453098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.281 [2024-12-09 15:21:25.453135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.281 [2024-12-09 15:21:25.453142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.281 [2024-12-09 15:21:25.453149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.281 [2024-12-09 15:21:25.453155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.281 [2024-12-09 15:21:25.454302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.281 [2024-12-09 15:21:25.454302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.281 [2024-12-09 15:21:25.521533] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:24.281 [2024-12-09 15:21:25.522054] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:24.281 [2024-12-09 15:21:25.522267] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
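For reference, the interrupt-mode target whose start-up notices appear above is launched inside that namespace with the command line recorded in this log; the polling loop below is only an illustrative stand-in for the suite's waitforlisten helper and assumes the default /var/tmp/spdk.sock RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Poll until the RPC socket answers before issuing configuration RPCs.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
    sleep 0.5
done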
00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 [2024-12-09 15:21:25.586970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.281 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 [2024-12-09 15:21:25.615294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 NULL1 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.282 15:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 Delay0 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1607073 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:24.282 15:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:24.282 [2024-12-09 15:21:25.726087] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
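The rpc_cmd calls traced above effectively forward their arguments to scripts/rpc.py against the target's RPC socket. An equivalent manual sequence, with every argument copied from this log ($rpc being illustrative shorthand for the rpc.py invocation), looks like this:
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                           # 1000 MB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000               # ~1 s of injected latency per read/write
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Initiator-side load generator (same invocation that became perf_pid=1607073 above):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
     -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
The delay bdev is what keeps I/O outstanding long enough for the subsequent nvmf_delete_subsystem call to race against in-flight requests, which is exactly the aborted-I/O storm visible in the perf output below.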
00:28:26.181 15:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.181 15:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.181 15:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 [2024-12-09 15:21:27.815581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53a780 is same with the state(6) to be set 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed 
with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 starting I/O failed: -6 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Write completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.181 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 
00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 starting I/O failed: -6 00:28:26.182 [2024-12-09 15:21:27.816093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f61d0000c40 is same with the state(6) to be set 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error 
(sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Write completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:26.182 Read completed with error (sct=0, sc=8) 00:28:27.116 [2024-12-09 15:21:28.780703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53b9b0 is same with the state(6) to be set 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 [2024-12-09 15:21:28.816943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53ab40 is same with the state(6) to be set 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 
00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.116 Write completed with error (sct=0, sc=8) 00:28:27.116 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 [2024-12-09 15:21:28.817190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53a2c0 is same with the state(6) to be set 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 [2024-12-09 15:21:28.817926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f61d000d020 is same with the state(6) to be set 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 
Write completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Write completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 Read completed with error (sct=0, sc=8) 00:28:27.117 [2024-12-09 15:21:28.818461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f61d000d680 is same with the state(6) to be set 00:28:27.117 Initializing NVMe Controllers 00:28:27.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.117 Controller IO queue size 128, less than required. 00:28:27.117 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:27.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:27.117 Initialization complete. Launching workers. 
00:28:27.117 ========================================================
00:28:27.117 Latency(us)
00:28:27.117 Device Information : IOPS MiB/s Average min max
00:28:27.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.80 0.08 951200.13 430.23 2001406.08
00:28:27.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.82 0.08 953410.75 266.35 2002111.87
00:28:27.117 ========================================================
00:28:27.117 Total : 330.62 0.16 952295.48 266.35 2002111.87
00:28:27.117 
00:28:27.117 [2024-12-09 15:21:28.819064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x53b9b0 (9): Bad file descriptor
00:28:27.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:27.117 15:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.117 15:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:27.117 15:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1607073
00:28:27.117 15:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.684 [2024-12-09 15:21:29.347258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1607681 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:27.684 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:27.684 [2024-12-09 15:21:29.432305] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
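The run of "(( delay++ > 20 ))" / "kill -0 1607681" / "sleep 0.5" entries that follows is a bounded wait for the 3-second perf client to exit on its own; condensed into plain shell (variable names illustrative, loop shape approximated), it amounts to:
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do                      # perf client still running?
    (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; exit 1; }
    sleep 0.5                                                  # i.e. allow roughly 10 s at most
done
wait "$perf_pid"                                               # reap it and collect its exit status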
00:28:28.251 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:28.251 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:28.251 15:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:28.815 15:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:28.815 15:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:28.815 15:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.382 15:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.382 15:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:29.382 15:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.640 15:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.640 15:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:29.640 15:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.207 15:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:30.207 15:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:30.207 15:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.773 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:30.773 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:30.773 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:31.031 Initializing NVMe Controllers 00:28:31.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.031 Controller IO queue size 128, less than required. 00:28:31.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:31.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:31.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:31.031 Initialization complete. Launching workers. 
00:28:31.031 ======================================================== 00:28:31.031 Latency(us) 00:28:31.031 Device Information : IOPS MiB/s Average min max 00:28:31.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002330.09 1000120.86 1041225.39 00:28:31.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004724.37 1000257.74 1042180.58 00:28:31.031 ======================================================== 00:28:31.031 Total : 256.00 0.12 1003527.23 1000120.86 1042180.58 00:28:31.031 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1607681 00:28:31.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1607681) - No such process 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1607681 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:31.289 rmmod nvme_tcp 00:28:31.289 rmmod nvme_fabrics 00:28:31.289 rmmod nvme_keyring 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1607046 ']' 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1607046 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1607046 ']' 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1607046 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.289 15:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1607046 00:28:31.289 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.289 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.289 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1607046' 00:28:31.289 killing process with pid 1607046 00:28:31.289 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1607046 00:28:31.289 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1607046 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.547 15:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.455 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.718 00:28:33.718 real 0m16.182s 00:28:33.718 user 0m26.116s 00:28:33.718 sys 0m6.050s 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.718 ************************************ 00:28:33.718 END TEST nvmf_delete_subsystem 00:28:33.718 ************************************ 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:33.718 ************************************ 00:28:33.718 START TEST nvmf_host_management 00:28:33.718 ************************************ 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:33.718 * Looking for test storage... 00:28:33.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.718 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:33.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.718 --rc genhtml_branch_coverage=1 00:28:33.719 --rc genhtml_function_coverage=1 00:28:33.719 --rc genhtml_legend=1 00:28:33.719 --rc geninfo_all_blocks=1 00:28:33.719 --rc geninfo_unexecuted_blocks=1 00:28:33.719 00:28:33.719 ' 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:33.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.719 --rc genhtml_branch_coverage=1 00:28:33.719 --rc genhtml_function_coverage=1 00:28:33.719 --rc genhtml_legend=1 00:28:33.719 --rc geninfo_all_blocks=1 00:28:33.719 --rc geninfo_unexecuted_blocks=1 00:28:33.719 00:28:33.719 ' 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:33.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.719 --rc genhtml_branch_coverage=1 00:28:33.719 --rc genhtml_function_coverage=1 00:28:33.719 --rc genhtml_legend=1 00:28:33.719 --rc geninfo_all_blocks=1 00:28:33.719 --rc geninfo_unexecuted_blocks=1 00:28:33.719 00:28:33.719 ' 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:33.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.719 --rc genhtml_branch_coverage=1 00:28:33.719 --rc genhtml_function_coverage=1 00:28:33.719 --rc genhtml_legend=1 
00:28:33.719 --rc geninfo_all_blocks=1 00:28:33.719 --rc geninfo_unexecuted_blocks=1 00:28:33.719 00:28:33.719 ' 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.719 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.976 15:21:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.976 15:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.538 15:21:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.538 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:40.539 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:40.539 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
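The device scan above walks the known Intel/Mellanox PCI IDs and resolves each matching function to its kernel net device through sysfs. An illustrative sketch of that lookup for one function from this run (the PCI address and the resulting cvl_0_0 name are specific to this host):

# Sketch: resolve the net interface behind a PCI function, as the pci_net_devs
# lookup in nvmf/common.sh does above.
pci=0000:af:00.0                     # assumption: first E810 port seen in this log
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $dev ]] || continue        # skip if the glob matched nothing
    echo "Found net devices under $pci: ${dev##*/}"
done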
00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:40.539 Found net devices under 0000:af:00.0: cvl_0_0 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:40.539 Found net devices under 0000:af:00.1: cvl_0_1 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:40.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:28:40.539 00:28:40.539 --- 10.0.0.2 ping statistics --- 00:28:40.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.539 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:40.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:28:40.539 00:28:40.539 --- 10.0.0.1 ping statistics --- 00:28:40.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.539 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1611696 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1611696 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1611696 ']' 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:40.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.539 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.539 [2024-12-09 15:21:41.480371] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:40.540 [2024-12-09 15:21:41.481351] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:28:40.540 [2024-12-09 15:21:41.481393] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.540 [2024-12-09 15:21:41.560019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.540 [2024-12-09 15:21:41.601989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.540 [2024-12-09 15:21:41.602025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.540 [2024-12-09 15:21:41.602032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.540 [2024-12-09 15:21:41.602037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.540 [2024-12-09 15:21:41.602042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.540 [2024-12-09 15:21:41.606234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.540 [2024-12-09 15:21:41.606323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.540 [2024-12-09 15:21:41.606606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.540 [2024-12-09 15:21:41.606606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:40.540 [2024-12-09 15:21:41.674138] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:40.540 [2024-12-09 15:21:41.674619] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:40.540 [2024-12-09 15:21:41.674669] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:40.540 [2024-12-09 15:21:41.675125] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:40.540 [2024-12-09 15:21:41.675131] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
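The trace above moves one port into a private network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420, verifies reachability with ping in both directions, and then launches nvmf_tgt inside the namespace in interrupt mode on core mask 0x1E. A condensed sketch using the names from this run (cvl_0_0/cvl_0_1 and the addresses are host-specific; root privileges assumed):

# Sketch of the TCP test-bed setup recorded above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Quick reachability check in both directions, as in the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace in interrupt mode on cores 1-4 (mask 0x1E).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &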
00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.540 [2024-12-09 15:21:41.751300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.540 Malloc0 00:28:40.540 [2024-12-09 15:21:41.847566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1611742 00:28:40.540 15:21:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1611742 /var/tmp/bdevperf.sock 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1611742 ']' 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:40.540 { 00:28:40.540 "params": { 00:28:40.540 "name": "Nvme$subsystem", 00:28:40.540 "trtype": "$TEST_TRANSPORT", 00:28:40.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.540 "adrfam": "ipv4", 00:28:40.540 "trsvcid": "$NVMF_PORT", 00:28:40.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.540 "hdgst": ${hdgst:-false}, 00:28:40.540 "ddgst": ${ddgst:-false} 00:28:40.540 }, 00:28:40.540 "method": "bdev_nvme_attach_controller" 00:28:40.540 } 00:28:40.540 EOF 00:28:40.540 )") 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
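Above, gen_nvmf_target_json emits a bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 / cnode0, and bdevperf reads it from the anonymous fd (/dev/fd/63) produced by process substitution, so no config file ever hits disk. A hedged sketch of that invocation, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json and the NVMF_* variables it relies on are available (an assumption; the test scripts set these up earlier):

# Sketch: feed the generated attach-controller config to bdevperf on an anonymous fd.
source test/nvmf/common.sh           # assumption: provides gen_nvmf_target_json
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!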
00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:40.540 15:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:40.540 "params": { 00:28:40.540 "name": "Nvme0", 00:28:40.540 "trtype": "tcp", 00:28:40.540 "traddr": "10.0.0.2", 00:28:40.540 "adrfam": "ipv4", 00:28:40.540 "trsvcid": "4420", 00:28:40.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:40.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:40.540 "hdgst": false, 00:28:40.540 "ddgst": false 00:28:40.540 }, 00:28:40.540 "method": "bdev_nvme_attach_controller" 00:28:40.540 }' 00:28:40.540 [2024-12-09 15:21:41.942836] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:28:40.540 [2024-12-09 15:21:41.942884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611742 ] 00:28:40.540 [2024-12-09 15:21:42.016345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.540 [2024-12-09 15:21:42.056123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.540 Running I/O for 10 seconds... 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=86 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 86 -ge 100 ']' 00:28:40.799 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.060 [2024-12-09 15:21:42.739005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.739043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.739050] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.739057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.739063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.739069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.739075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ba60 is same with the state(6) to be set 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.060 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:41.060 [2024-12-09 15:21:42.749827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.060 [2024-12-09 15:21:42.749863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.749873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.060 [2024-12-09 15:21:42.749881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.749888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.060 [2024-12-09 15:21:42.749895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.749902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.060 [2024-12-09 15:21:42.749908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.749915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233caa0 is same with the state(6) to be set 00:28:41.060 [2024-12-09 15:21:42.749963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.749972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.749986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.749993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.060 [2024-12-09 15:21:42.750187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.060 [2024-12-09 15:21:42.750195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.061 [2024-12-09 15:21:42.750779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.061 [2024-12-09 15:21:42.750785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.062 [2024-12-09 15:21:42.750793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.062 [2024-12-09 15:21:42.750799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.062 [2024-12-09 15:21:42.750807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.062 [2024-12-09 15:21:42.750813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.062 [2024-12-09 15:21:42.750821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.062 [2024-12-09 15:21:42.750829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.062 [2024-12-09 15:21:42.750837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.062 [2024-12-09 15:21:42.750843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.062 [2024-12-09 15:21:42.750851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.062 [2024-12-09 15:21:42.750857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.062 [2024-12-09 15:21:42.750864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.062 [2024-12-09 15:21:42.750871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.062 [2024-12-09 15:21:42.750882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.062 [2024-12-09 15:21:42.750889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.062 [2024-12-09 15:21:42.750897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.062 [2024-12-09 15:21:42.750903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:41.062 [2024-12-09 15:21:42.751840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:41.062 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.062 15:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:41.062 task offset: 106496 on job bdev=Nvme0n1 fails
00:28:41.062
00:28:41.062 Latency(us)
00:28:41.062 [2024-12-09T14:21:42.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:41.062 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:41.062 Job: Nvme0n1 ended in about 0.42 seconds with error
00:28:41.062 Verification LBA range: start 0x0 length 0x400
00:28:41.062 Nvme0n1 : 0.42 1986.05 124.13 152.77 0.00 29144.51 1575.98 26464.06
00:28:41.062 [2024-12-09T14:21:42.857Z] ===================================================================================================================
00:28:41.062 [2024-12-09T14:21:42.857Z] Total : 1986.05 124.13 152.77 0.00 29144.51 1575.98 26464.06
00:28:41.062 [2024-12-09 15:21:42.754261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:41.062 [2024-12-09 15:21:42.754283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233caa0 (9): Bad file descriptor
00:28:41.062 [2024-12-09 15:21:42.847398] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
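The block above is host_management.sh's waitforio loop (poll bdev_get_iostat over the bdevperf RPC socket until read I/O passes a threshold) followed by a forced nvmf_subsystem_remove_host/add_host, which is what triggers the SQ-deletion aborts and the controller reset logged here. A minimal stand-alone sketch of that flow, assuming this workspace's rpc.py and the /var/tmp/bdevperf.sock socket used in the run; the wait_for_read_io helper name is ours, not part of the test script:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

wait_for_read_io() {
    local bdev=$1 threshold=${2:-100} retries=${3:-10} ops
    while (( retries-- > 0 )); do
        # bdev_get_iostat returns JSON; take the read-op counter of the first bdev
        ops=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        [[ $ops =~ ^[0-9]+$ ]] && (( ops >= threshold )) && return 0
        sleep 0.25
    done
    return 1
}

# Once I/O is flowing, removing and re-adding the host on the target side forces the
# initiator to reconnect, producing the ABORTED/SQ DELETION notices and the reset seen above.
wait_for_read_io Nvme0n1 100 10 &&
    "$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 &&
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0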
00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1611742 00:28:41.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1611742) - No such process 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.996 { 00:28:41.996 "params": { 00:28:41.996 "name": "Nvme$subsystem", 00:28:41.996 "trtype": "$TEST_TRANSPORT", 00:28:41.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.996 "adrfam": "ipv4", 00:28:41.996 "trsvcid": "$NVMF_PORT", 00:28:41.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.996 "hdgst": ${hdgst:-false}, 00:28:41.996 "ddgst": ${ddgst:-false} 00:28:41.996 }, 00:28:41.996 "method": "bdev_nvme_attach_controller" 00:28:41.996 } 00:28:41.996 EOF 00:28:41.996 )") 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:41.996 15:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.996 "params": { 00:28:41.996 "name": "Nvme0", 00:28:41.996 "trtype": "tcp", 00:28:41.996 "traddr": "10.0.0.2", 00:28:41.996 "adrfam": "ipv4", 00:28:41.996 "trsvcid": "4420", 00:28:41.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:41.996 "hdgst": false, 00:28:41.996 "ddgst": false 00:28:41.996 }, 00:28:41.996 "method": "bdev_nvme_attach_controller" 00:28:41.996 }' 00:28:42.255 [2024-12-09 15:21:43.805484] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
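The "params" object printed just above is what bdevperf replays from the JSON handed to it on /dev/fd/62: each key maps one-to-one onto the bdev_nvme_attach_controller RPC. As an illustration only (issuing the same attach by hand against a running SPDK application rather than through the generated config), it would look roughly like this, with rpc.py taken from this workspace's SPDK tree:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# -b names the controller (Nvme0 -> bdev Nvme0n1 that bdevperf runs against);
# -t/-a/-s/-f/-n/-q carry trtype, traddr, trsvcid, adrfam, subnqn and hostnqn from the
# "params" block above; hdgst/ddgst are simply left at their default of false.
"$rpc" bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0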
00:28:42.255 [2024-12-09 15:21:43.805532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612115 ] 00:28:42.255 [2024-12-09 15:21:43.878250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.255 [2024-12-09 15:21:43.915850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.514 Running I/O for 1 seconds... 00:28:43.447 2048.00 IOPS, 128.00 MiB/s 00:28:43.447 Latency(us) 00:28:43.447 [2024-12-09T14:21:45.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.447 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:43.447 Verification LBA range: start 0x0 length 0x400 00:28:43.447 Nvme0n1 : 1.00 2103.71 131.48 0.00 0.00 29939.37 5492.54 26713.72 00:28:43.447 [2024-12-09T14:21:45.242Z] =================================================================================================================== 00:28:43.447 [2024-12-09T14:21:45.242Z] Total : 2103.71 131.48 0.00 0.00 29939.37 5492.54 26713.72 00:28:43.447 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:43.447 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:43.447 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.705 rmmod nvme_tcp 00:28:43.705 rmmod nvme_fabrics 00:28:43.705 rmmod nvme_keyring 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1611696 ']' 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1611696 00:28:43.705 15:21:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1611696 ']' 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1611696 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611696 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611696' 00:28:43.705 killing process with pid 1611696 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1611696 00:28:43.705 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1611696 00:28:43.964 [2024-12-09 15:21:45.527838] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.964 15:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.866 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.866 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:45.866 00:28:45.866 real 0m12.300s 00:28:45.866 user 
0m17.796s 00:28:45.866 sys 0m6.320s 00:28:45.866 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.866 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:45.866 ************************************ 00:28:45.866 END TEST nvmf_host_management 00:28:45.866 ************************************ 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:46.125 ************************************ 00:28:46.125 START TEST nvmf_lvol 00:28:46.125 ************************************ 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:46.125 * Looking for test storage... 00:28:46.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:46.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.125 --rc genhtml_branch_coverage=1 00:28:46.125 --rc genhtml_function_coverage=1 00:28:46.125 --rc genhtml_legend=1 00:28:46.125 --rc geninfo_all_blocks=1 00:28:46.125 --rc geninfo_unexecuted_blocks=1 00:28:46.125 00:28:46.125 ' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:46.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.125 --rc genhtml_branch_coverage=1 00:28:46.125 --rc genhtml_function_coverage=1 00:28:46.125 --rc genhtml_legend=1 00:28:46.125 --rc geninfo_all_blocks=1 00:28:46.125 --rc geninfo_unexecuted_blocks=1 00:28:46.125 00:28:46.125 ' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:46.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.125 --rc genhtml_branch_coverage=1 00:28:46.125 --rc genhtml_function_coverage=1 00:28:46.125 --rc genhtml_legend=1 00:28:46.125 --rc geninfo_all_blocks=1 00:28:46.125 --rc geninfo_unexecuted_blocks=1 00:28:46.125 00:28:46.125 ' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:46.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.125 --rc genhtml_branch_coverage=1 00:28:46.125 --rc genhtml_function_coverage=1 
00:28:46.125 --rc genhtml_legend=1 00:28:46.125 --rc geninfo_all_blocks=1 00:28:46.125 --rc geninfo_unexecuted_blocks=1 00:28:46.125 00:28:46.125 ' 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.125 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.126 15:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.126 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.384 15:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.950 15:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:52.950 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:52.950 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:52.950 Found net devices under 0000:af:00.0: cvl_0_0 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:52.950 Found net devices under 0000:af:00.1: cvl_0_1 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.950 
15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.950 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:28:52.951 00:28:52.951 --- 10.0.0.2 ping statistics --- 00:28:52.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.951 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:52.951 00:28:52.951 --- 10.0.0.1 ping statistics --- 00:28:52.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.951 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1615822 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1615822 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1615822 ']' 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.951 15:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:52.951 [2024-12-09 15:21:53.862563] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
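For reference, the device discovery above matched the two E810 ports (0000:af:00.0 / 0000:af:00.1, driver ice) to net devices cvl_0_0 and cvl_0_1, and nvmf_tcp_init then split them into a target namespace plus a host-side initiator interface. The sequence condenses to roughly the following sketch; interface names, addresses and the namespace name are the ones discovered in this particular run and would differ on other rigs.

# Target port goes into its own namespace; the peer port stays in the host namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment is what lets teardown strip exactly
# this rule later with iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check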
00:28:52.951 [2024-12-09 15:21:53.863455] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:28:52.951 [2024-12-09 15:21:53.863487] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.951 [2024-12-09 15:21:53.942270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:52.951 [2024-12-09 15:21:53.982026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.951 [2024-12-09 15:21:53.982062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.951 [2024-12-09 15:21:53.982070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.951 [2024-12-09 15:21:53.982076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.951 [2024-12-09 15:21:53.982081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.951 [2024-12-09 15:21:53.983425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.951 [2024-12-09 15:21:53.983535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.951 [2024-12-09 15:21:53.983536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.951 [2024-12-09 15:21:54.050454] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:52.951 [2024-12-09 15:21:54.051234] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:52.951 [2024-12-09 15:21:54.051345] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:52.951 [2024-12-09 15:21:54.051469] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
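The nvmfappstart -m 0x7 call traced above reduces to the launch line below; the binary path, SHM id, tracepoint mask and --interrupt-mode flag are the ones assembled into NVMF_APP for this run, and the netns prefix was added once nvmf_tcp_init finished. This is a recap of the trace, not an alternative procedure.

# Launch the target inside the target namespace, on cores 0-2, in interrupt mode.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!                       # 1615822 in this run
# waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; the NOTICE
# lines above show one reactor per bit in 0x7 and the app thread plus the three
# nvmf_tgt poll-group threads being switched to interrupt mode.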
00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:52.951 [2024-12-09 15:21:54.280314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:52.951 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:53.210 15:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:53.470 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6208d640-4b14-4a44-9fa9-cac37144d433 00:28:53.470 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6208d640-4b14-4a44-9fa9-cac37144d433 lvol 20 00:28:53.728 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e79f9640-9495-4dba-84cd-191176a437cd 00:28:53.728 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:53.986 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e79f9640-9495-4dba-84cd-191176a437cd 00:28:53.986 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:54.243 [2024-12-09 15:21:55.868168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:54.243 15:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.500 15:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:54.500 15:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1616189 00:28:54.500 15:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:55.431 15:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e79f9640-9495-4dba-84cd-191176a437cd MY_SNAPSHOT 00:28:55.689 15:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ed458073-56f8-4de7-a37e-8848c6394d5d 00:28:55.689 15:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e79f9640-9495-4dba-84cd-191176a437cd 30 00:28:55.947 15:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ed458073-56f8-4de7-a37e-8848c6394d5d MY_CLONE 00:28:56.205 15:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1f4c1ca6-d4e7-4a55-8700-c5917bad2050 00:28:56.205 15:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1f4c1ca6-d4e7-4a55-8700-c5917bad2050 00:28:56.771 15:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1616189 00:29:04.882 Initializing NVMe Controllers 00:29:04.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:04.882 Controller IO queue size 128, less than required. 00:29:04.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:04.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:04.882 Initialization complete. Launching workers. 
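The RPC sequence traced above provisions the logical volume and exports it over NVMe/TCP, then exercises lvol metadata operations while spdk_nvme_perf keeps write I/O in flight against it. A condensed sketch follows; the UUIDs are the ones generated in this run, sizes come from the MALLOC_BDEV_SIZE/LVOL_BDEV_*_SIZE values set at the top of nvmf_lvol.sh, and this is a recap rather than a standalone recipe.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

# Provision: raid0 over two 64 MiB malloc bdevs, lvstore on the raid, 20 MiB lvol,
# then export the lvol as namespace 1 of cnode0 on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc bdev_lvol_create_lvstore raid0 lvs             # -> 6208d640-4b14-4a44-9fa9-cac37144d433
$rpc bdev_lvol_create -u 6208d640-4b14-4a44-9fa9-cac37144d433 lvol 20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e79f9640-9495-4dba-84cd-191176a437cd
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Keep a 10 s random-write load running from cores 3-4 (mask 0x18) ...
$bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!                      # 1616189 in this run
sleep 1
# ... while mutating the lvol underneath it: snapshot, resize 20 -> 30, clone, inflate.
$rpc bdev_lvol_snapshot e79f9640-9495-4dba-84cd-191176a437cd MY_SNAPSHOT
$rpc bdev_lvol_resize   e79f9640-9495-4dba-84cd-191176a437cd 30
$rpc bdev_lvol_clone    ed458073-56f8-4de7-a37e-8848c6394d5d MY_CLONE
$rpc bdev_lvol_inflate  1f4c1ca6-d4e7-4a55-8700-c5917bad2050
wait "$perf_pid"

The per-core latency table that follows is spdk_nvme_perf's normal end-of-run summary for the two worker cores.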
00:29:04.882 ======================================================== 00:29:04.882 Latency(us) 00:29:04.882 Device Information : IOPS MiB/s Average min max 00:29:04.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12690.80 49.57 10090.62 1548.36 54896.41 00:29:04.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12581.40 49.15 10173.90 3584.50 58753.73 00:29:04.882 ======================================================== 00:29:04.882 Total : 25272.20 98.72 10132.08 1548.36 58753.73 00:29:04.882 00:29:04.882 15:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:04.882 15:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e79f9640-9495-4dba-84cd-191176a437cd 00:29:05.140 15:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6208d640-4b14-4a44-9fa9-cac37144d433 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.398 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.398 rmmod nvme_tcp 00:29:05.399 rmmod nvme_fabrics 00:29:05.399 rmmod nvme_keyring 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1615822 ']' 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1615822 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1615822 ']' 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1615822 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615822 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615822' 00:29:05.399 killing process with pid 1615822 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1615822 00:29:05.399 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1615822 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.658 15:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:08.192 00:29:08.192 real 0m21.708s 00:29:08.192 user 0m55.321s 00:29:08.192 sys 0m9.783s 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:08.192 ************************************ 00:29:08.192 END TEST nvmf_lvol 00:29:08.192 ************************************ 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:08.192 ************************************ 00:29:08.192 START TEST nvmf_lvs_grow 00:29:08.192 
************************************ 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:08.192 * Looking for test storage... 00:29:08.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:08.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.192 --rc genhtml_branch_coverage=1 00:29:08.192 --rc genhtml_function_coverage=1 00:29:08.192 --rc genhtml_legend=1 00:29:08.192 --rc geninfo_all_blocks=1 00:29:08.192 --rc geninfo_unexecuted_blocks=1 00:29:08.192 00:29:08.192 ' 00:29:08.192 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.193 --rc genhtml_branch_coverage=1 00:29:08.193 --rc genhtml_function_coverage=1 00:29:08.193 --rc genhtml_legend=1 00:29:08.193 --rc geninfo_all_blocks=1 00:29:08.193 --rc geninfo_unexecuted_blocks=1 00:29:08.193 00:29:08.193 ' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.193 --rc genhtml_branch_coverage=1 00:29:08.193 --rc genhtml_function_coverage=1 00:29:08.193 --rc genhtml_legend=1 00:29:08.193 --rc geninfo_all_blocks=1 00:29:08.193 --rc geninfo_unexecuted_blocks=1 00:29:08.193 00:29:08.193 ' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.193 --rc genhtml_branch_coverage=1 00:29:08.193 --rc genhtml_function_coverage=1 00:29:08.193 --rc genhtml_legend=1 00:29:08.193 --rc geninfo_all_blocks=1 00:29:08.193 --rc geninfo_unexecuted_blocks=1 00:29:08.193 00:29:08.193 ' 00:29:08.193 15:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
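The nvmf/common.sh prologue traced above generates a host NQN/ID pair and starts assembling the target's argument array. A minimal sketch of those two steps is below; the NQN/ID values are the ones produced in this run (gen-hostnqn output is random), and the parameter-expansion used to derive NVME_HOSTID is only an illustration of the relationship, not necessarily how common.sh computes it.

NVME_HOSTNQN=$(nvme gen-hostnqn)   # this run: nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # illustrative: the uuid suffix, 801347e8-3fd0-e911-906e-0017a4403562 here
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Base target arguments; --interrupt-mode is appended right after this point because
# the whole suite runs with the interrupt-mode flag (see the next trace lines).
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)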
00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.193 15:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.760 15:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:14.760 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:14.760 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:14.760 Found net devices under 0000:af:00.0: cvl_0_0 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.760 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:14.761 Found net devices under 0000:af:00.1: cvl_0_1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.761 15:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:29:14.761 00:29:14.761 --- 10.0.0.2 ping statistics --- 00:29:14.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.761 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:14.761 00:29:14.761 --- 10.0.0.1 ping statistics --- 00:29:14.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.761 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1621488 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1621488 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1621488 ']' 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.761 15:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:14.761 [2024-12-09 15:22:15.675830] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
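The preceding block is nvmf/common.sh bringing up the NVMe/TCP test network for this run: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420, one ping in each direction confirms connectivity, and nvmf_tgt is then launched inside the namespace in interrupt mode on core 0. A condensed sketch of the same plumbing follows; the interface names are the ones reported above, and the SPDK paths are shortened for readability.

# Sketch of the namespace setup performed by nvmf/common.sh above (interface names from this run).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace
ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &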
00:29:14.761 [2024-12-09 15:22:15.676736] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:14.761 [2024-12-09 15:22:15.676769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.761 [2024-12-09 15:22:15.767969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.761 [2024-12-09 15:22:15.808044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.761 [2024-12-09 15:22:15.808082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.761 [2024-12-09 15:22:15.808089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.761 [2024-12-09 15:22:15.808095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.761 [2024-12-09 15:22:15.808100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.761 [2024-12-09 15:22:15.808625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.761 [2024-12-09 15:22:15.876728] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:14.761 [2024-12-09 15:22:15.876933] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.761 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:15.021 [2024-12-09 15:22:16.721306] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:15.021 ************************************ 00:29:15.021 START TEST lvs_grow_clean 00:29:15.021 ************************************ 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:15.021 15:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:15.280 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:15.280 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:15.538 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:15.538 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:15.538 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:15.796 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:15.796 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:15.796 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc lvol 150 00:29:16.053 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6a45c32c-afe6-4f67-8cc7-f31548b5373e 00:29:16.054 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:16.054 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:16.054 [2024-12-09 15:22:17.777015] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:16.054 [2024-12-09 15:22:17.777140] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:16.054 true 00:29:16.054 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:16.054 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:16.311 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:16.311 15:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:16.569 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a45c32c-afe6-4f67-8cc7-f31548b5373e 00:29:16.569 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:16.828 [2024-12-09 15:22:18.517460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.828 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:17.086 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1621984 00:29:17.086 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.086 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1621984 /var/tmp/bdevperf.sock 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1621984 ']' 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.087 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.087 [2024-12-09 15:22:18.753294] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:17.087 [2024-12-09 15:22:18.753345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621984 ] 00:29:17.087 [2024-12-09 15:22:18.826433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.087 [2024-12-09 15:22:18.866796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.344 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.344 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:17.344 15:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:17.600 Nvme0n1 00:29:17.600 15:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:17.857 [ 00:29:17.857 { 00:29:17.857 "name": "Nvme0n1", 00:29:17.857 "aliases": [ 00:29:17.857 "6a45c32c-afe6-4f67-8cc7-f31548b5373e" 00:29:17.857 ], 00:29:17.857 "product_name": "NVMe disk", 00:29:17.857 "block_size": 4096, 00:29:17.857 "num_blocks": 38912, 00:29:17.857 "uuid": "6a45c32c-afe6-4f67-8cc7-f31548b5373e", 00:29:17.857 "numa_id": 1, 00:29:17.857 "assigned_rate_limits": { 00:29:17.857 "rw_ios_per_sec": 0, 00:29:17.857 "rw_mbytes_per_sec": 0, 00:29:17.857 "r_mbytes_per_sec": 0, 00:29:17.857 "w_mbytes_per_sec": 0 00:29:17.857 }, 00:29:17.857 "claimed": false, 00:29:17.857 "zoned": false, 00:29:17.857 "supported_io_types": { 00:29:17.857 "read": true, 00:29:17.857 "write": true, 00:29:17.857 "unmap": true, 00:29:17.857 "flush": true, 00:29:17.857 "reset": true, 00:29:17.857 "nvme_admin": true, 00:29:17.857 "nvme_io": true, 00:29:17.857 "nvme_io_md": false, 00:29:17.857 "write_zeroes": true, 00:29:17.857 "zcopy": false, 00:29:17.857 "get_zone_info": false, 00:29:17.857 "zone_management": false, 00:29:17.857 "zone_append": false, 00:29:17.857 "compare": true, 00:29:17.857 "compare_and_write": true, 00:29:17.857 "abort": true, 00:29:17.857 "seek_hole": false, 00:29:17.857 "seek_data": false, 00:29:17.857 "copy": true, 
00:29:17.857 "nvme_iov_md": false 00:29:17.857 }, 00:29:17.857 "memory_domains": [ 00:29:17.857 { 00:29:17.857 "dma_device_id": "system", 00:29:17.857 "dma_device_type": 1 00:29:17.857 } 00:29:17.857 ], 00:29:17.857 "driver_specific": { 00:29:17.857 "nvme": [ 00:29:17.857 { 00:29:17.857 "trid": { 00:29:17.857 "trtype": "TCP", 00:29:17.857 "adrfam": "IPv4", 00:29:17.857 "traddr": "10.0.0.2", 00:29:17.857 "trsvcid": "4420", 00:29:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:17.857 }, 00:29:17.857 "ctrlr_data": { 00:29:17.857 "cntlid": 1, 00:29:17.857 "vendor_id": "0x8086", 00:29:17.857 "model_number": "SPDK bdev Controller", 00:29:17.857 "serial_number": "SPDK0", 00:29:17.857 "firmware_revision": "25.01", 00:29:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.857 "oacs": { 00:29:17.857 "security": 0, 00:29:17.857 "format": 0, 00:29:17.857 "firmware": 0, 00:29:17.857 "ns_manage": 0 00:29:17.857 }, 00:29:17.857 "multi_ctrlr": true, 00:29:17.857 "ana_reporting": false 00:29:17.857 }, 00:29:17.857 "vs": { 00:29:17.857 "nvme_version": "1.3" 00:29:17.857 }, 00:29:17.857 "ns_data": { 00:29:17.857 "id": 1, 00:29:17.858 "can_share": true 00:29:17.858 } 00:29:17.858 } 00:29:17.858 ], 00:29:17.858 "mp_policy": "active_passive" 00:29:17.858 } 00:29:17.858 } 00:29:17.858 ] 00:29:17.858 15:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1622023 00:29:17.858 15:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:17.858 15:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:17.858 Running I/O for 10 seconds... 
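At this point the lvs_grow_clean case has built its whole stack: a 200 MiB file-backed AIO bdev, a logical volume store named lvs with 4 MiB clusters (49 data clusters), and a 150 MiB lvol exported over NVMe/TCP as nqn.2016-06.io.spdk:cnode0; the backing file was then truncated to 400 MiB and bdev_aio_rescan picked up the larger size (51200 -> 102400 blocks). The ten-second bdevperf randwrite run that follows keeps I/O going while bdev_lvol_grow_lvstore is issued mid-run, after which total_data_clusters is expected to move from 49 to 99. A condensed sketch of the rpc.py sequence, with repository paths shortened and the UUIDs captured into variables the way the test script does:

# Sketch of the clean-grow flow driven above via rpc.py (paths shortened).
RPC=scripts/rpc.py
AIO=test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # reports 49 data clusters
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)              # 150 MiB logical volume
truncate -s 400M "$AIO"                                       # grow the backing file
$RPC bdev_aio_rescan aio_bdev                                 # 51200 -> 102400 blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_lvol_grow_lvstore -u "$lvs"                         # issued while bdevperf runs; 49 -> 99 clusters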
00:29:19.229 Latency(us) 00:29:19.229 [2024-12-09T14:22:21.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.229 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:19.229 [2024-12-09T14:22:21.024Z] =================================================================================================================== 00:29:19.229 [2024-12-09T14:22:21.024Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:19.229 00:29:19.796 15:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:20.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.068 Nvme0n1 : 2.00 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:29:20.068 [2024-12-09T14:22:21.863Z] =================================================================================================================== 00:29:20.068 [2024-12-09T14:22:21.863Z] Total : 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:29:20.068 00:29:20.068 true 00:29:20.068 15:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:20.068 15:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:20.360 15:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:20.360 15:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:20.360 15:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1622023 00:29:21.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.058 Nvme0n1 : 3.00 23194.00 90.60 0.00 0.00 0.00 0.00 0.00 00:29:21.058 [2024-12-09T14:22:22.853Z] =================================================================================================================== 00:29:21.058 [2024-12-09T14:22:22.853Z] Total : 23194.00 90.60 0.00 0.00 0.00 0.00 0.00 00:29:21.058 00:29:21.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.990 Nvme0n1 : 4.00 23332.75 91.14 0.00 0.00 0.00 0.00 0.00 00:29:21.990 [2024-12-09T14:22:23.785Z] =================================================================================================================== 00:29:21.990 [2024-12-09T14:22:23.785Z] Total : 23332.75 91.14 0.00 0.00 0.00 0.00 0.00 00:29:21.990 00:29:22.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.923 Nvme0n1 : 5.00 23416.00 91.47 0.00 0.00 0.00 0.00 0.00 00:29:22.923 [2024-12-09T14:22:24.718Z] =================================================================================================================== 00:29:22.923 [2024-12-09T14:22:24.718Z] Total : 23416.00 91.47 0.00 0.00 0.00 0.00 0.00 00:29:22.923 00:29:23.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.862 Nvme0n1 : 6.00 23492.67 91.77 0.00 0.00 0.00 0.00 0.00 00:29:23.862 [2024-12-09T14:22:25.657Z] 
=================================================================================================================== 00:29:23.862 [2024-12-09T14:22:25.657Z] Total : 23492.67 91.77 0.00 0.00 0.00 0.00 0.00 00:29:23.862 00:29:25.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.233 Nvme0n1 : 7.00 23547.43 91.98 0.00 0.00 0.00 0.00 0.00 00:29:25.233 [2024-12-09T14:22:27.028Z] =================================================================================================================== 00:29:25.233 [2024-12-09T14:22:27.028Z] Total : 23547.43 91.98 0.00 0.00 0.00 0.00 0.00 00:29:25.233 00:29:26.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.163 Nvme0n1 : 8.00 23588.50 92.14 0.00 0.00 0.00 0.00 0.00 00:29:26.163 [2024-12-09T14:22:27.958Z] =================================================================================================================== 00:29:26.163 [2024-12-09T14:22:27.958Z] Total : 23588.50 92.14 0.00 0.00 0.00 0.00 0.00 00:29:26.163 00:29:27.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.094 Nvme0n1 : 9.00 23620.44 92.27 0.00 0.00 0.00 0.00 0.00 00:29:27.094 [2024-12-09T14:22:28.889Z] =================================================================================================================== 00:29:27.094 [2024-12-09T14:22:28.889Z] Total : 23620.44 92.27 0.00 0.00 0.00 0.00 0.00 00:29:27.094 00:29:28.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.026 Nvme0n1 : 10.00 23646.00 92.37 0.00 0.00 0.00 0.00 0.00 00:29:28.026 [2024-12-09T14:22:29.821Z] =================================================================================================================== 00:29:28.026 [2024-12-09T14:22:29.821Z] Total : 23646.00 92.37 0.00 0.00 0.00 0.00 0.00 00:29:28.026 00:29:28.026 00:29:28.026 Latency(us) 00:29:28.026 [2024-12-09T14:22:29.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.026 Nvme0n1 : 10.01 23644.27 92.36 0.00 0.00 5410.50 3229.99 26214.40 00:29:28.026 [2024-12-09T14:22:29.821Z] =================================================================================================================== 00:29:28.026 [2024-12-09T14:22:29.821Z] Total : 23644.27 92.36 0.00 0.00 5410.50 3229.99 26214.40 00:29:28.026 { 00:29:28.026 "results": [ 00:29:28.026 { 00:29:28.026 "job": "Nvme0n1", 00:29:28.026 "core_mask": "0x2", 00:29:28.026 "workload": "randwrite", 00:29:28.026 "status": "finished", 00:29:28.026 "queue_depth": 128, 00:29:28.026 "io_size": 4096, 00:29:28.026 "runtime": 10.006147, 00:29:28.026 "iops": 23644.265869769853, 00:29:28.026 "mibps": 92.36041355378849, 00:29:28.026 "io_failed": 0, 00:29:28.026 "io_timeout": 0, 00:29:28.026 "avg_latency_us": 5410.496333219814, 00:29:28.026 "min_latency_us": 3229.9885714285715, 00:29:28.026 "max_latency_us": 26214.4 00:29:28.026 } 00:29:28.026 ], 00:29:28.027 "core_count": 1 00:29:28.027 } 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1621984 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1621984 ']' 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1621984 00:29:28.027 
15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1621984 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1621984' 00:29:28.027 killing process with pid 1621984 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1621984 00:29:28.027 Received shutdown signal, test time was about 10.000000 seconds 00:29:28.027 00:29:28.027 Latency(us) 00:29:28.027 [2024-12-09T14:22:29.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.027 [2024-12-09T14:22:29.822Z] =================================================================================================================== 00:29:28.027 [2024-12-09T14:22:29.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.027 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1621984 00:29:28.285 15:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:28.286 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:28.544 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:28.544 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:28.802 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:28.802 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:28.802 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:29.061 [2024-12-09 15:22:30.637078] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:29.061 
15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.061 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.062 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:29.062 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.062 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:29.062 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:29.320 request: 00:29:29.320 { 00:29:29.320 "uuid": "72a28c30-d0b8-4565-8b07-3c45a9f5adfc", 00:29:29.320 "method": "bdev_lvol_get_lvstores", 00:29:29.320 "req_id": 1 00:29:29.320 } 00:29:29.320 Got JSON-RPC error response 00:29:29.320 response: 00:29:29.320 { 00:29:29.320 "code": -19, 00:29:29.320 "message": "No such device" 00:29:29.320 } 00:29:29.320 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:29.320 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:29.320 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:29.320 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:29.320 15:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:29.320 aio_bdev 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
6a45c32c-afe6-4f67-8cc7-f31548b5373e 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6a45c32c-afe6-4f67-8cc7-f31548b5373e 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:29.320 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:29.579 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6a45c32c-afe6-4f67-8cc7-f31548b5373e -t 2000 00:29:29.837 [ 00:29:29.837 { 00:29:29.837 "name": "6a45c32c-afe6-4f67-8cc7-f31548b5373e", 00:29:29.837 "aliases": [ 00:29:29.837 "lvs/lvol" 00:29:29.837 ], 00:29:29.837 "product_name": "Logical Volume", 00:29:29.837 "block_size": 4096, 00:29:29.837 "num_blocks": 38912, 00:29:29.837 "uuid": "6a45c32c-afe6-4f67-8cc7-f31548b5373e", 00:29:29.837 "assigned_rate_limits": { 00:29:29.837 "rw_ios_per_sec": 0, 00:29:29.837 "rw_mbytes_per_sec": 0, 00:29:29.837 "r_mbytes_per_sec": 0, 00:29:29.837 "w_mbytes_per_sec": 0 00:29:29.837 }, 00:29:29.837 "claimed": false, 00:29:29.837 "zoned": false, 00:29:29.837 "supported_io_types": { 00:29:29.837 "read": true, 00:29:29.837 "write": true, 00:29:29.837 "unmap": true, 00:29:29.837 "flush": false, 00:29:29.837 "reset": true, 00:29:29.837 "nvme_admin": false, 00:29:29.837 "nvme_io": false, 00:29:29.837 "nvme_io_md": false, 00:29:29.837 "write_zeroes": true, 00:29:29.837 "zcopy": false, 00:29:29.837 "get_zone_info": false, 00:29:29.837 "zone_management": false, 00:29:29.837 "zone_append": false, 00:29:29.837 "compare": false, 00:29:29.837 "compare_and_write": false, 00:29:29.837 "abort": false, 00:29:29.837 "seek_hole": true, 00:29:29.837 "seek_data": true, 00:29:29.837 "copy": false, 00:29:29.837 "nvme_iov_md": false 00:29:29.837 }, 00:29:29.837 "driver_specific": { 00:29:29.837 "lvol": { 00:29:29.837 "lvol_store_uuid": "72a28c30-d0b8-4565-8b07-3c45a9f5adfc", 00:29:29.837 "base_bdev": "aio_bdev", 00:29:29.837 "thin_provision": false, 00:29:29.837 "num_allocated_clusters": 38, 00:29:29.837 "snapshot": false, 00:29:29.837 "clone": false, 00:29:29.837 "esnap_clone": false 00:29:29.837 } 00:29:29.837 } 00:29:29.837 } 00:29:29.837 ] 00:29:29.837 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:29.837 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:29.837 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:30.095 15:22:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:30.095 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:30.095 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:30.095 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:30.095 15:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6a45c32c-afe6-4f67-8cc7-f31548b5373e 00:29:30.353 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 72a28c30-d0b8-4565-8b07-3c45a9f5adfc 00:29:30.611 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.869 00:29:30.869 real 0m15.694s 00:29:30.869 user 0m15.233s 00:29:30.869 sys 0m1.465s 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.869 ************************************ 00:29:30.869 END TEST lvs_grow_clean 00:29:30.869 ************************************ 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.869 ************************************ 00:29:30.869 START TEST lvs_grow_dirty 00:29:30.869 ************************************ 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:30.869 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:30.870 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.870 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.870 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:31.128 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:31.128 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:31.386 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ccc3e387-fccd-4014-aece-ae23617123da 00:29:31.386 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:31.386 15:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:31.644 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:31.644 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:31.644 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ccc3e387-fccd-4014-aece-ae23617123da lvol 150 00:29:31.644 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:31.644 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.644 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:31.902 [2024-12-09 15:22:33.521008] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:31.902 [2024-12-09 15:22:33.521137] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:31.902 true 00:29:31.902 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:31.902 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:32.161 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:32.161 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.161 15:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:32.419 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.677 [2024-12-09 15:22:34.261437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.677 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1624542 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1624542 /var/tmp/bdevperf.sock 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1624542 ']' 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
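The dirty variant has just repeated the same aio_bdev, lvstore and lvol setup (lvstore ccc3e387-fccd-4014-aece-ae23617123da, lvol 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb) and is now starting its measurement side. As in the clean case, bdevperf is launched with its own RPC socket and -z so it stays idle until configured, an NVMe-oF controller is attached to it over TCP just below, and perform_tests starts the workload. A condensed sketch of that wiring, with repository paths shortened:

# Sketch of the bdevperf wiring used by both lvs_grow variants (paths shortened).
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
    -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &               # -z: stay idle until driven over RPC
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests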
00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:32.935 [2024-12-09 15:22:34.520152] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:32.935 [2024-12-09 15:22:34.520205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624542 ] 00:29:32.935 [2024-12-09 15:22:34.595149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.935 [2024-12-09 15:22:34.635143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:32.935 15:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:33.499 Nvme0n1 00:29:33.499 15:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:33.499 [ 00:29:33.499 { 00:29:33.499 "name": "Nvme0n1", 00:29:33.499 "aliases": [ 00:29:33.499 "19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb" 00:29:33.499 ], 00:29:33.499 "product_name": "NVMe disk", 00:29:33.499 "block_size": 4096, 00:29:33.499 "num_blocks": 38912, 00:29:33.499 "uuid": "19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb", 00:29:33.499 "numa_id": 1, 00:29:33.499 "assigned_rate_limits": { 00:29:33.499 "rw_ios_per_sec": 0, 00:29:33.499 "rw_mbytes_per_sec": 0, 00:29:33.499 "r_mbytes_per_sec": 0, 00:29:33.499 "w_mbytes_per_sec": 0 00:29:33.499 }, 00:29:33.499 "claimed": false, 00:29:33.499 "zoned": false, 00:29:33.499 "supported_io_types": { 00:29:33.499 "read": true, 00:29:33.499 "write": true, 00:29:33.499 "unmap": true, 00:29:33.499 "flush": true, 00:29:33.499 "reset": true, 00:29:33.499 "nvme_admin": true, 00:29:33.499 "nvme_io": true, 00:29:33.499 "nvme_io_md": false, 00:29:33.499 "write_zeroes": true, 00:29:33.499 "zcopy": false, 00:29:33.499 "get_zone_info": false, 00:29:33.499 "zone_management": false, 00:29:33.499 "zone_append": false, 00:29:33.499 "compare": true, 00:29:33.499 "compare_and_write": true, 00:29:33.499 "abort": true, 00:29:33.499 "seek_hole": false, 00:29:33.499 "seek_data": false, 00:29:33.499 "copy": true, 00:29:33.499 "nvme_iov_md": false 00:29:33.499 }, 00:29:33.499 "memory_domains": [ 00:29:33.499 { 00:29:33.499 "dma_device_id": "system", 00:29:33.499 "dma_device_type": 1 00:29:33.499 } 00:29:33.499 ], 00:29:33.499 "driver_specific": { 00:29:33.499 "nvme": [ 00:29:33.499 { 00:29:33.499 "trid": { 00:29:33.499 "trtype": "TCP", 00:29:33.499 "adrfam": "IPv4", 00:29:33.499 "traddr": "10.0.0.2", 00:29:33.499 "trsvcid": "4420", 00:29:33.499 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:33.499 }, 00:29:33.499 "ctrlr_data": 
{ 00:29:33.499 "cntlid": 1, 00:29:33.499 "vendor_id": "0x8086", 00:29:33.499 "model_number": "SPDK bdev Controller", 00:29:33.499 "serial_number": "SPDK0", 00:29:33.499 "firmware_revision": "25.01", 00:29:33.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.499 "oacs": { 00:29:33.499 "security": 0, 00:29:33.499 "format": 0, 00:29:33.499 "firmware": 0, 00:29:33.499 "ns_manage": 0 00:29:33.499 }, 00:29:33.499 "multi_ctrlr": true, 00:29:33.499 "ana_reporting": false 00:29:33.499 }, 00:29:33.499 "vs": { 00:29:33.499 "nvme_version": "1.3" 00:29:33.499 }, 00:29:33.499 "ns_data": { 00:29:33.499 "id": 1, 00:29:33.499 "can_share": true 00:29:33.499 } 00:29:33.499 } 00:29:33.499 ], 00:29:33.499 "mp_policy": "active_passive" 00:29:33.499 } 00:29:33.499 } 00:29:33.499 ] 00:29:33.499 15:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1624585 00:29:33.499 15:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:33.499 15:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:33.756 Running I/O for 10 seconds... 00:29:34.687 Latency(us) 00:29:34.687 [2024-12-09T14:22:36.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.687 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:34.687 [2024-12-09T14:22:36.482Z] =================================================================================================================== 00:29:34.687 [2024-12-09T14:22:36.482Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:34.687 00:29:35.618 15:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:35.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.618 Nvme0n1 : 2.00 23249.50 90.82 0.00 0.00 0.00 0.00 0.00 00:29:35.618 [2024-12-09T14:22:37.413Z] =================================================================================================================== 00:29:35.618 [2024-12-09T14:22:37.413Z] Total : 23249.50 90.82 0.00 0.00 0.00 0.00 0.00 00:29:35.618 00:29:35.875 true 00:29:35.875 15:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:35.875 15:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:35.875 15:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:35.875 15:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:35.875 15:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1624585 00:29:36.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.806 Nvme0n1 : 
3.00 23373.67 91.30 0.00 0.00 0.00 0.00 0.00 00:29:36.806 [2024-12-09T14:22:38.601Z] =================================================================================================================== 00:29:36.806 [2024-12-09T14:22:38.601Z] Total : 23373.67 91.30 0.00 0.00 0.00 0.00 0.00 00:29:36.806 00:29:37.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.735 Nvme0n1 : 4.00 23467.50 91.67 0.00 0.00 0.00 0.00 0.00 00:29:37.735 [2024-12-09T14:22:39.530Z] =================================================================================================================== 00:29:37.735 [2024-12-09T14:22:39.530Z] Total : 23467.50 91.67 0.00 0.00 0.00 0.00 0.00 00:29:37.735 00:29:38.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.665 Nvme0n1 : 5.00 23501.80 91.80 0.00 0.00 0.00 0.00 0.00 00:29:38.665 [2024-12-09T14:22:40.460Z] =================================================================================================================== 00:29:38.665 [2024-12-09T14:22:40.460Z] Total : 23501.80 91.80 0.00 0.00 0.00 0.00 0.00 00:29:38.665 00:29:39.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.595 Nvme0n1 : 6.00 23553.67 92.01 0.00 0.00 0.00 0.00 0.00 00:29:39.595 [2024-12-09T14:22:41.390Z] =================================================================================================================== 00:29:39.595 [2024-12-09T14:22:41.390Z] Total : 23553.67 92.01 0.00 0.00 0.00 0.00 0.00 00:29:39.595 00:29:40.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.964 Nvme0n1 : 7.00 23597.57 92.18 0.00 0.00 0.00 0.00 0.00 00:29:40.964 [2024-12-09T14:22:42.759Z] =================================================================================================================== 00:29:40.964 [2024-12-09T14:22:42.759Z] Total : 23597.57 92.18 0.00 0.00 0.00 0.00 0.00 00:29:40.964 00:29:41.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.894 Nvme0n1 : 8.00 23632.38 92.31 0.00 0.00 0.00 0.00 0.00 00:29:41.894 [2024-12-09T14:22:43.689Z] =================================================================================================================== 00:29:41.894 [2024-12-09T14:22:43.689Z] Total : 23632.38 92.31 0.00 0.00 0.00 0.00 0.00 00:29:41.894 00:29:42.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.825 Nvme0n1 : 9.00 23659.44 92.42 0.00 0.00 0.00 0.00 0.00 00:29:42.825 [2024-12-09T14:22:44.620Z] =================================================================================================================== 00:29:42.825 [2024-12-09T14:22:44.620Z] Total : 23659.44 92.42 0.00 0.00 0.00 0.00 0.00 00:29:42.825 00:29:43.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.757 Nvme0n1 : 10.00 23681.10 92.50 0.00 0.00 0.00 0.00 0.00 00:29:43.757 [2024-12-09T14:22:45.552Z] =================================================================================================================== 00:29:43.757 [2024-12-09T14:22:45.552Z] Total : 23681.10 92.50 0.00 0.00 0.00 0.00 0.00 00:29:43.757 00:29:43.757 00:29:43.757 Latency(us) 00:29:43.757 [2024-12-09T14:22:45.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.757 Nvme0n1 : 10.00 23684.14 92.52 0.00 0.00 5401.49 3120.76 26963.38 00:29:43.757 
[2024-12-09T14:22:45.552Z] =================================================================================================================== 00:29:43.757 [2024-12-09T14:22:45.552Z] Total : 23684.14 92.52 0.00 0.00 5401.49 3120.76 26963.38 00:29:43.757 { 00:29:43.757 "results": [ 00:29:43.757 { 00:29:43.757 "job": "Nvme0n1", 00:29:43.757 "core_mask": "0x2", 00:29:43.757 "workload": "randwrite", 00:29:43.757 "status": "finished", 00:29:43.757 "queue_depth": 128, 00:29:43.757 "io_size": 4096, 00:29:43.757 "runtime": 10.004121, 00:29:43.757 "iops": 23684.13976600243, 00:29:43.757 "mibps": 92.516170960947, 00:29:43.757 "io_failed": 0, 00:29:43.757 "io_timeout": 0, 00:29:43.757 "avg_latency_us": 5401.488513318376, 00:29:43.757 "min_latency_us": 3120.7619047619046, 00:29:43.757 "max_latency_us": 26963.382857142857 00:29:43.757 } 00:29:43.757 ], 00:29:43.757 "core_count": 1 00:29:43.757 } 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1624542 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1624542 ']' 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1624542 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624542 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624542' 00:29:43.757 killing process with pid 1624542 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1624542 00:29:43.757 Received shutdown signal, test time was about 10.000000 seconds 00:29:43.757 00:29:43.757 Latency(us) 00:29:43.757 [2024-12-09T14:22:45.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.757 [2024-12-09T14:22:45.552Z] =================================================================================================================== 00:29:43.757 [2024-12-09T14:22:45.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.757 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1624542 00:29:44.015 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.015 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:44.273 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:44.273 15:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1621488 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1621488 00:29:44.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1621488 Killed "${NVMF_APP[@]}" "$@" 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1626364 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1626364 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1626364 ']' 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
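The lvstore query pattern used in the dirty-grow check above can be reproduced standalone. A minimal sketch, assuming an SPDK checkout as the working directory and a target listening on the default /var/tmp/spdk.sock socket (the Jenkins workspace paths in the trace are not required); the UUID is the one from this run:

    # Read the lvstore by UUID and pull out the cluster counters, mirroring the
    # bdev_lvol_get_lvstores | jq calls in the trace above.
    RPC=./scripts/rpc.py
    UUID=ccc3e387-fccd-4014-aece-ae23617123da
    free_clusters=$("$RPC" bdev_lvol_get_lvstores -u "$UUID" | jq -r '.[0].free_clusters')
    total_clusters=$("$RPC" bdev_lvol_get_lvstores -u "$UUID" | jq -r '.[0].total_data_clusters')
    echo "free=$free_clusters total=$total_clusters"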
00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.531 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:44.531 [2024-12-09 15:22:46.266607] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:44.531 [2024-12-09 15:22:46.267506] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:44.531 [2024-12-09 15:22:46.267542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.790 [2024-12-09 15:22:46.344040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.790 [2024-12-09 15:22:46.382603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.790 [2024-12-09 15:22:46.382637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.790 [2024-12-09 15:22:46.382644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.790 [2024-12-09 15:22:46.382650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.790 [2024-12-09 15:22:46.382656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.790 [2024-12-09 15:22:46.383167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.790 [2024-12-09 15:22:46.449349] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:44.790 [2024-12-09 15:22:46.449538] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
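At this point the trace has killed the previous target and restarted nvmf_tgt in interrupt mode on a single core (-m 0x1). A rough standalone sketch of that restart, assuming an SPDK checkout and no network namespace (the trace additionally wraps the binary in ip netns exec cvl_0_0_ns_spdk); the readiness probe via rpc_get_methods is an assumption for illustration, not the harness's own waitforlisten logic:

    # Start the target in interrupt mode on core 0, then wait until its RPC
    # socket answers before issuing further rpc.py calls.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null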
00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.790 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:45.048 [2024-12-09 15:22:46.692579] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:45.048 [2024-12-09 15:22:46.692786] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:45.048 [2024-12-09 15:22:46.692871] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:45.048 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:45.307 15:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb -t 2000 00:29:45.307 [ 00:29:45.307 { 00:29:45.307 "name": "19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb", 00:29:45.307 "aliases": [ 00:29:45.307 "lvs/lvol" 00:29:45.307 ], 00:29:45.307 "product_name": "Logical Volume", 00:29:45.307 "block_size": 4096, 00:29:45.307 "num_blocks": 38912, 00:29:45.307 "uuid": "19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb", 00:29:45.307 "assigned_rate_limits": { 00:29:45.307 "rw_ios_per_sec": 0, 00:29:45.307 "rw_mbytes_per_sec": 0, 00:29:45.307 
"r_mbytes_per_sec": 0, 00:29:45.307 "w_mbytes_per_sec": 0 00:29:45.307 }, 00:29:45.307 "claimed": false, 00:29:45.307 "zoned": false, 00:29:45.307 "supported_io_types": { 00:29:45.307 "read": true, 00:29:45.307 "write": true, 00:29:45.307 "unmap": true, 00:29:45.307 "flush": false, 00:29:45.307 "reset": true, 00:29:45.307 "nvme_admin": false, 00:29:45.307 "nvme_io": false, 00:29:45.307 "nvme_io_md": false, 00:29:45.307 "write_zeroes": true, 00:29:45.307 "zcopy": false, 00:29:45.307 "get_zone_info": false, 00:29:45.307 "zone_management": false, 00:29:45.307 "zone_append": false, 00:29:45.307 "compare": false, 00:29:45.307 "compare_and_write": false, 00:29:45.307 "abort": false, 00:29:45.307 "seek_hole": true, 00:29:45.307 "seek_data": true, 00:29:45.307 "copy": false, 00:29:45.307 "nvme_iov_md": false 00:29:45.307 }, 00:29:45.307 "driver_specific": { 00:29:45.307 "lvol": { 00:29:45.307 "lvol_store_uuid": "ccc3e387-fccd-4014-aece-ae23617123da", 00:29:45.307 "base_bdev": "aio_bdev", 00:29:45.307 "thin_provision": false, 00:29:45.307 "num_allocated_clusters": 38, 00:29:45.307 "snapshot": false, 00:29:45.307 "clone": false, 00:29:45.307 "esnap_clone": false 00:29:45.307 } 00:29:45.307 } 00:29:45.307 } 00:29:45.307 ] 00:29:45.565 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:45.565 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:45.565 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:45.565 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:45.565 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:45.565 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:45.823 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:45.823 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:46.082 [2024-12-09 15:22:47.667633] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:46.082 15:22:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:46.082 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:46.341 request: 00:29:46.341 { 00:29:46.341 "uuid": "ccc3e387-fccd-4014-aece-ae23617123da", 00:29:46.341 "method": "bdev_lvol_get_lvstores", 00:29:46.341 "req_id": 1 00:29:46.341 } 00:29:46.341 Got JSON-RPC error response 00:29:46.341 response: 00:29:46.341 { 00:29:46.341 "code": -19, 00:29:46.341 "message": "No such device" 00:29:46.341 } 00:29:46.341 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:46.341 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:46.341 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:46.341 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:46.341 15:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:46.341 aio_bdev 00:29:46.341 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:46.341 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:46.341 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:46.341 15:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:46.341 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:46.341 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:46.341 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:46.599 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb -t 2000 00:29:46.858 [ 00:29:46.858 { 00:29:46.858 "name": "19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb", 00:29:46.858 "aliases": [ 00:29:46.858 "lvs/lvol" 00:29:46.858 ], 00:29:46.858 "product_name": "Logical Volume", 00:29:46.858 "block_size": 4096, 00:29:46.858 "num_blocks": 38912, 00:29:46.858 "uuid": "19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb", 00:29:46.858 "assigned_rate_limits": { 00:29:46.858 "rw_ios_per_sec": 0, 00:29:46.858 "rw_mbytes_per_sec": 0, 00:29:46.858 "r_mbytes_per_sec": 0, 00:29:46.858 "w_mbytes_per_sec": 0 00:29:46.858 }, 00:29:46.858 "claimed": false, 00:29:46.858 "zoned": false, 00:29:46.858 "supported_io_types": { 00:29:46.858 "read": true, 00:29:46.858 "write": true, 00:29:46.858 "unmap": true, 00:29:46.858 "flush": false, 00:29:46.858 "reset": true, 00:29:46.858 "nvme_admin": false, 00:29:46.858 "nvme_io": false, 00:29:46.858 "nvme_io_md": false, 00:29:46.858 "write_zeroes": true, 00:29:46.858 "zcopy": false, 00:29:46.858 "get_zone_info": false, 00:29:46.858 "zone_management": false, 00:29:46.858 "zone_append": false, 00:29:46.858 "compare": false, 00:29:46.858 "compare_and_write": false, 00:29:46.858 "abort": false, 00:29:46.858 "seek_hole": true, 00:29:46.858 "seek_data": true, 00:29:46.858 "copy": false, 00:29:46.858 "nvme_iov_md": false 00:29:46.858 }, 00:29:46.858 "driver_specific": { 00:29:46.858 "lvol": { 00:29:46.858 "lvol_store_uuid": "ccc3e387-fccd-4014-aece-ae23617123da", 00:29:46.858 "base_bdev": "aio_bdev", 00:29:46.858 "thin_provision": false, 00:29:46.858 "num_allocated_clusters": 38, 00:29:46.858 "snapshot": false, 00:29:46.858 "clone": false, 00:29:46.858 "esnap_clone": false 00:29:46.858 } 00:29:46.858 } 00:29:46.858 } 00:29:46.858 ] 00:29:46.858 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:46.858 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:46.858 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:47.117 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:47.117 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:47.117 15:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:47.117 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:47.117 15:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb 00:29:47.375 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ccc3e387-fccd-4014-aece-ae23617123da 00:29:47.633 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:47.892 00:29:47.892 real 0m16.927s 00:29:47.892 user 0m34.297s 00:29:47.892 sys 0m3.873s 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:47.892 ************************************ 00:29:47.892 END TEST lvs_grow_dirty 00:29:47.892 ************************************ 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:47.892 nvmf_trace.0 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
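The teardown above deletes objects in dependency order: the lvol first, then its lvstore, then the backing AIO bdev, and finally the file behind it. A condensed sketch of the same sequence using the UUIDs and names from this run; paths are illustrative and would need adjusting outside this workspace:

    # Dependency-ordered cleanup, mirroring the nvmf_lvs_grow.sh steps above.
    RPC=./scripts/rpc.py
    "$RPC" bdev_lvol_delete 19b3e6c4-e5f1-4636-ad31-2ed88f8ee5cb
    "$RPC" bdev_lvol_delete_lvstore -u ccc3e387-fccd-4014-aece-ae23617123da
    "$RPC" bdev_aio_delete aio_bdev
    rm -f ./test/nvmf/target/aio_bdev   # assumption: relative path into the SPDK tree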
00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.892 rmmod nvme_tcp 00:29:47.892 rmmod nvme_fabrics 00:29:47.892 rmmod nvme_keyring 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1626364 ']' 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1626364 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1626364 ']' 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1626364 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.892 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1626364 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1626364' 00:29:48.152 killing process with pid 1626364 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1626364 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1626364 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.152 15:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.687 15:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.687 00:29:50.687 real 0m42.485s 00:29:50.687 user 0m52.225s 00:29:50.687 sys 0m10.230s 00:29:50.687 15:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.687 15:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:50.687 ************************************ 00:29:50.687 END TEST nvmf_lvs_grow 00:29:50.687 ************************************ 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:50.688 ************************************ 00:29:50.688 START TEST nvmf_bdev_io_wait 00:29:50.688 ************************************ 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:50.688 * Looking for test storage... 
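The suite then moves on to the bdev_io_wait test, launched through run_test with the same transport and interrupt-mode flags as the lvs_grow test before it. Outside the harness the equivalent invocation would look roughly like the sketch below; the checkout path is illustrative, not the Jenkins workspace one:

    # Run the next test script directly, with the flags the harness passes above.
    cd /path/to/spdk
    ./test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode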
00:29:50.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.688 --rc genhtml_branch_coverage=1 00:29:50.688 --rc genhtml_function_coverage=1 00:29:50.688 --rc genhtml_legend=1 00:29:50.688 --rc geninfo_all_blocks=1 00:29:50.688 --rc geninfo_unexecuted_blocks=1 00:29:50.688 00:29:50.688 ' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.688 --rc genhtml_branch_coverage=1 00:29:50.688 --rc genhtml_function_coverage=1 00:29:50.688 --rc genhtml_legend=1 00:29:50.688 --rc geninfo_all_blocks=1 00:29:50.688 --rc geninfo_unexecuted_blocks=1 00:29:50.688 00:29:50.688 ' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.688 --rc genhtml_branch_coverage=1 00:29:50.688 --rc genhtml_function_coverage=1 00:29:50.688 --rc genhtml_legend=1 00:29:50.688 --rc geninfo_all_blocks=1 00:29:50.688 --rc geninfo_unexecuted_blocks=1 00:29:50.688 00:29:50.688 ' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.688 --rc genhtml_branch_coverage=1 00:29:50.688 --rc genhtml_function_coverage=1 00:29:50.688 --rc genhtml_legend=1 00:29:50.688 --rc geninfo_all_blocks=1 00:29:50.688 --rc 
geninfo_unexecuted_blocks=1 00:29:50.688 00:29:50.688 ' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.688 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.689 15:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
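nvmf/common.sh classifies candidate NICs by PCI vendor and device ID before picking the target and initiator interfaces; with SPDK_TEST_NVMF_NICS=e810 only the Intel E810 IDs (0x1592, 0x159b) are of interest. A quick host-side way to list the same devices the e810 array above would match (lspci flags are standard; the ID pattern is taken from the trace):

    # List Intel E810 ports by vendor:device ID, matching the e810 entries above.
    lspci -nn | grep -Ei '8086:(1592|159b)'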
00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:56.082 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:56.082 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:56.082 Found net devices under 0000:af:00.0: cvl_0_0 00:29:56.082 
15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:56.082 Found net devices under 0000:af:00.1: cvl_0_1 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.082 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.352 15:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.352 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.352 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.352 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:29:56.611 00:29:56.611 --- 10.0.0.2 ping statistics --- 00:29:56.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.611 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:29:56.611 00:29:56.611 --- 10.0.0.1 ping statistics --- 00:29:56.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.611 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1630392 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1630392 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1630392 ']' 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
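The block above is nvmf_tcp_init wiring up the test topology: one port of the E810 pair is moved into a private network namespace to act as the target, the other stays in the default namespace as the initiator, and a single iptables rule plus two pings confirm the 10.0.0.0/24 path before anything NVMe-related starts. A condensed sketch of those steps (interface and namespace names taken from this run; the comment tag on the iptables rule is dropped for brevity):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the other port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator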
00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.611 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.611 [2024-12-09 15:22:58.284714] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:56.611 [2024-12-09 15:22:58.285595] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:56.611 [2024-12-09 15:22:58.285629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.611 [2024-12-09 15:22:58.363526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.611 [2024-12-09 15:22:58.403842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.611 [2024-12-09 15:22:58.403880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.611 [2024-12-09 15:22:58.403888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.611 [2024-12-09 15:22:58.403894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.611 [2024-12-09 15:22:58.403899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.611 [2024-12-09 15:22:58.405380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.611 [2024-12-09 15:22:58.405488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.611 [2024-12-09 15:22:58.405525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.611 [2024-12-09 15:22:58.405526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.870 [2024-12-09 15:22:58.405962] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
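With the namespace reachable, nvmfappstart launches the target inside it. The exact invocation is in the trace above; restated on its own (path shortened to the repo-relative one), the three flags that shape the rest of this test are --interrupt-mode, the 0xF core mask, and --wait-for-rpc, which defers subsystem initialization until the RPCs issued next:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # waitforlisten then polls until the app accepts RPCs on /var/tmp/spdk.sock.

The EAL and reactor notices above confirm four reactors came up and the app thread entered interrupt mode.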
00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 [2024-12-09 15:22:58.545638] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:56.870 [2024-12-09 15:22:58.545819] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:56.870 [2024-12-09 15:22:58.546113] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:56.870 [2024-12-09 15:22:58.546268] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
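Because the target was started with --wait-for-rpc, the bdev layer can still be tuned before initialization: the test shrinks the bdev IO pool and cache, then triggers framework init, at which point the four nvmf_tgt poll-group threads also switch to interrupt mode (the notices just above). In this suite rpc_cmd forwards its arguments to scripts/rpc.py against the target's RPC socket, so the equivalent by hand is roughly:

    # against the default /var/tmp/spdk.sock
    scripts/rpc.py bdev_set_options -p 5 -c 1    # bdev_io_pool_size=5, bdev_io_cache_size=1
    scripts/rpc.py framework_start_init          # run the deferred subsystem initialization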
00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 [2024-12-09 15:22:58.558345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 Malloc0 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:56.870 [2024-12-09 15:22:58.630648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1630615 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1630617 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.870 { 00:29:56.870 "params": { 00:29:56.870 "name": "Nvme$subsystem", 00:29:56.870 "trtype": "$TEST_TRANSPORT", 00:29:56.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.870 "adrfam": "ipv4", 00:29:56.870 "trsvcid": "$NVMF_PORT", 00:29:56.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.870 "hdgst": ${hdgst:-false}, 00:29:56.870 "ddgst": ${ddgst:-false} 00:29:56.870 }, 00:29:56.870 "method": "bdev_nvme_attach_controller" 00:29:56.870 } 00:29:56.870 EOF 00:29:56.870 )") 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1630619 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.870 { 00:29:56.870 "params": { 00:29:56.870 "name": "Nvme$subsystem", 00:29:56.870 "trtype": "$TEST_TRANSPORT", 00:29:56.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.870 "adrfam": "ipv4", 00:29:56.870 "trsvcid": "$NVMF_PORT", 00:29:56.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.870 "hdgst": ${hdgst:-false}, 00:29:56.870 "ddgst": ${ddgst:-false} 00:29:56.870 }, 00:29:56.870 "method": "bdev_nvme_attach_controller" 00:29:56.870 } 00:29:56.870 EOF 00:29:56.870 )") 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1630622 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.870 { 00:29:56.870 "params": { 00:29:56.870 "name": "Nvme$subsystem", 00:29:56.870 "trtype": "$TEST_TRANSPORT", 00:29:56.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.870 "adrfam": "ipv4", 00:29:56.870 "trsvcid": "$NVMF_PORT", 00:29:56.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.870 "hdgst": ${hdgst:-false}, 00:29:56.870 "ddgst": ${ddgst:-false} 00:29:56.870 }, 00:29:56.870 "method": "bdev_nvme_attach_controller" 00:29:56.870 } 00:29:56.870 EOF 00:29:56.870 )") 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:56.870 { 00:29:56.870 "params": { 00:29:56.870 "name": "Nvme$subsystem", 00:29:56.870 "trtype": "$TEST_TRANSPORT", 00:29:56.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.870 "adrfam": "ipv4", 00:29:56.870 "trsvcid": "$NVMF_PORT", 00:29:56.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.870 "hdgst": ${hdgst:-false}, 00:29:56.870 "ddgst": ${ddgst:-false} 00:29:56.870 }, 00:29:56.870 "method": "bdev_nvme_attach_controller" 00:29:56.870 } 00:29:56.870 EOF 00:29:56.870 )") 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1630615 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
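Each bdevperf initiator is handed its bdev configuration on /dev/fd/63 via process substitution: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry pointing at the listener created above (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1), and the fully resolved JSON is printed next in the trace. The write-workload launch, condensed from the command above:

    # write workload, core mask 0x10, queue depth 128, 4 KiB IOs, 1 second, 256 MB of memory
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!

The read, flush and unmap instances differ only in -w, the core mask and the instance id.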
00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.870 "params": { 00:29:56.870 "name": "Nvme1", 00:29:56.870 "trtype": "tcp", 00:29:56.870 "traddr": "10.0.0.2", 00:29:56.870 "adrfam": "ipv4", 00:29:56.870 "trsvcid": "4420", 00:29:56.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.870 "hdgst": false, 00:29:56.870 "ddgst": false 00:29:56.870 }, 00:29:56.870 "method": "bdev_nvme_attach_controller" 00:29:56.870 }' 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:56.870 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.870 "params": { 00:29:56.870 "name": "Nvme1", 00:29:56.870 "trtype": "tcp", 00:29:56.870 "traddr": "10.0.0.2", 00:29:56.871 "adrfam": "ipv4", 00:29:56.871 "trsvcid": "4420", 00:29:56.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.871 "hdgst": false, 00:29:56.871 "ddgst": false 00:29:56.871 }, 00:29:56.871 "method": "bdev_nvme_attach_controller" 00:29:56.871 }' 00:29:56.871 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.871 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.871 "params": { 00:29:56.871 "name": "Nvme1", 00:29:56.871 "trtype": "tcp", 00:29:56.871 "traddr": "10.0.0.2", 00:29:56.871 "adrfam": "ipv4", 00:29:56.871 "trsvcid": "4420", 00:29:56.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.871 "hdgst": false, 00:29:56.871 "ddgst": false 00:29:56.871 }, 00:29:56.871 "method": "bdev_nvme_attach_controller" 00:29:56.871 }' 00:29:56.871 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:56.871 15:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:56.871 "params": { 00:29:56.871 "name": "Nvme1", 00:29:56.871 "trtype": "tcp", 00:29:56.871 "traddr": "10.0.0.2", 00:29:56.871 "adrfam": "ipv4", 00:29:56.871 "trsvcid": "4420", 00:29:56.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.871 "hdgst": false, 00:29:56.871 "ddgst": false 00:29:56.871 }, 00:29:56.871 "method": "bdev_nvme_attach_controller" 00:29:56.871 }' 00:29:57.128 [2024-12-09 15:22:58.682689] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:57.128 [2024-12-09 15:22:58.682743] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:57.128 [2024-12-09 15:22:58.684125] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:29:57.128 [2024-12-09 15:22:58.684169] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:57.128 [2024-12-09 15:22:58.686239] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:57.128 [2024-12-09 15:22:58.686287] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:57.128 [2024-12-09 15:22:58.692807] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:29:57.128 [2024-12-09 15:22:58.692880] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:57.128 [2024-12-09 15:22:58.868409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.128 [2024-12-09 15:22:58.913175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:57.385 [2024-12-09 15:22:58.960246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.385 [2024-12-09 15:22:59.004498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:57.385 [2024-12-09 15:22:59.053642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.385 [2024-12-09 15:22:59.108550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.385 [2024-12-09 15:22:59.108758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:57.385 [2024-12-09 15:22:59.150254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:57.642 Running I/O for 1 seconds... 00:29:57.642 Running I/O for 1 seconds... 00:29:57.642 Running I/O for 1 seconds... 00:29:57.642 Running I/O for 1 seconds... 
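Four one-second jobs are now in flight against the same Malloc0-backed namespace: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80. The script reaps them with wait on the PIDs captured at launch (1630615/1630617/1630619/1630622 in this run), and the per-workload latency tables that follow are their interleaved summaries:

    wait "$WRITE_PID"    # -w write,  -m 0x10
    wait "$READ_PID"     # -w read,   -m 0x20
    wait "$FLUSH_PID"    # -w flush,  -m 0x40
    wait "$UNMAP_PID"    # -w unmap,  -m 0x80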
00:29:58.573 13648.00 IOPS, 53.31 MiB/s 00:29:58.573 Latency(us) 00:29:58.573 [2024-12-09T14:23:00.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.573 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:58.573 Nvme1n1 : 1.01 13715.31 53.58 0.00 0.00 9307.65 3308.01 10735.42 00:29:58.573 [2024-12-09T14:23:00.368Z] =================================================================================================================== 00:29:58.573 [2024-12-09T14:23:00.368Z] Total : 13715.31 53.58 0.00 0.00 9307.65 3308.01 10735.42 00:29:58.573 10511.00 IOPS, 41.06 MiB/s [2024-12-09T14:23:00.368Z] 241952.00 IOPS, 945.12 MiB/s 00:29:58.573 Latency(us) 00:29:58.573 [2024-12-09T14:23:00.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.573 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:58.573 Nvme1n1 : 1.01 10584.55 41.35 0.00 0.00 12054.62 1521.37 14854.83 00:29:58.573 [2024-12-09T14:23:00.368Z] =================================================================================================================== 00:29:58.573 [2024-12-09T14:23:00.368Z] Total : 10584.55 41.35 0.00 0.00 12054.62 1521.37 14854.83 00:29:58.573 00:29:58.573 Latency(us) 00:29:58.573 [2024-12-09T14:23:00.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.573 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:58.574 Nvme1n1 : 1.00 241592.30 943.72 0.00 0.00 527.22 218.45 1490.16 00:29:58.574 [2024-12-09T14:23:00.369Z] =================================================================================================================== 00:29:58.574 [2024-12-09T14:23:00.369Z] Total : 241592.30 943.72 0.00 0.00 527.22 218.45 1490.16 00:29:58.830 10675.00 IOPS, 41.70 MiB/s 00:29:58.830 Latency(us) 00:29:58.830 [2024-12-09T14:23:00.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.830 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:58.830 Nvme1n1 : 1.01 10749.53 41.99 0.00 0.00 11873.76 4306.65 16976.94 00:29:58.830 [2024-12-09T14:23:00.625Z] =================================================================================================================== 00:29:58.830 [2024-12-09T14:23:00.625Z] Total : 10749.53 41.99 0.00 0.00 11873.76 4306.65 16976.94 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1630617 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1630619 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1630622 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:58.830 15:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.830 rmmod nvme_tcp 00:29:58.830 rmmod nvme_fabrics 00:29:58.830 rmmod nvme_keyring 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1630392 ']' 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1630392 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1630392 ']' 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1630392 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.830 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1630392 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1630392' 00:29:59.089 killing process with pid 1630392 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1630392 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1630392 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:59.089 15:23:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.089 15:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.623 00:30:01.623 real 0m10.822s 00:30:01.623 user 0m15.027s 00:30:01.623 sys 0m6.525s 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:01.623 ************************************ 00:30:01.623 END TEST nvmf_bdev_io_wait 00:30:01.623 ************************************ 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.623 ************************************ 00:30:01.623 START TEST nvmf_queue_depth 00:30:01.623 ************************************ 00:30:01.623 15:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:01.623 * Looking for test storage... 
00:30:01.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:01.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.623 --rc genhtml_branch_coverage=1 00:30:01.623 --rc genhtml_function_coverage=1 00:30:01.623 --rc genhtml_legend=1 00:30:01.623 --rc geninfo_all_blocks=1 00:30:01.623 --rc geninfo_unexecuted_blocks=1 00:30:01.623 00:30:01.623 ' 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:01.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.623 --rc genhtml_branch_coverage=1 00:30:01.623 --rc genhtml_function_coverage=1 00:30:01.623 --rc genhtml_legend=1 00:30:01.623 --rc geninfo_all_blocks=1 00:30:01.623 --rc geninfo_unexecuted_blocks=1 00:30:01.623 00:30:01.623 ' 00:30:01.623 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:01.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.624 --rc genhtml_branch_coverage=1 00:30:01.624 --rc genhtml_function_coverage=1 00:30:01.624 --rc genhtml_legend=1 00:30:01.624 --rc geninfo_all_blocks=1 00:30:01.624 --rc geninfo_unexecuted_blocks=1 00:30:01.624 00:30:01.624 ' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:01.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.624 --rc genhtml_branch_coverage=1 00:30:01.624 --rc genhtml_function_coverage=1 00:30:01.624 --rc genhtml_legend=1 00:30:01.624 --rc geninfo_all_blocks=1 00:30:01.624 --rc 
geninfo_unexecuted_blocks=1 00:30:01.624 00:30:01.624 ' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.624 15:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.191 15:23:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:08.191 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:08.191 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:30:08.191 Found net devices under 0000:af:00.0: cvl_0_0 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:08.191 Found net devices under 0000:af:00.1: cvl_0_1 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.191 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.192 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.192 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.192 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.192 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.192 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.192 15:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:30:08.192 00:30:08.192 --- 10.0.0.2 ping statistics --- 00:30:08.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.192 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:30:08.192 00:30:08.192 --- 10.0.0.1 ping statistics --- 00:30:08.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.192 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1634362 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1634362 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1634362 ']' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
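For reference, the nvmf_tcp_init sequence traced above reduces to a short series of iproute2/iptables commands. This is a condensed sketch of what nvmf/common.sh does on this rig, not a general recipe: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing come straight from the log, and everything below needs root.

# move the target-side port into its own namespace, keep the initiator side in the default one
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends of the back-to-back link and bring the interfaces up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port on the initiator-side interface and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1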
00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 [2024-12-09 15:23:09.122123] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.192 [2024-12-09 15:23:09.123095] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:30:08.192 [2024-12-09 15:23:09.123133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.192 [2024-12-09 15:23:09.205464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.192 [2024-12-09 15:23:09.243280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.192 [2024-12-09 15:23:09.243317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.192 [2024-12-09 15:23:09.243324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.192 [2024-12-09 15:23:09.243330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.192 [2024-12-09 15:23:09.243336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.192 [2024-12-09 15:23:09.243816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.192 [2024-12-09 15:23:09.310314] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:08.192 [2024-12-09 15:23:09.310510] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
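The target itself is launched inside that namespace; the notices above ("Total cores available: 1", "Reactor started on core 1", threads set to intr mode) are consistent with the -m 0x2 core mask and the --interrupt-mode flag passed by nvmfappstart. A minimal stand-alone equivalent, with a crude poll in place of the harness's waitforlisten helper (the rpc_get_methods probe is just one convenient way to tell that the RPC socket is up, not what the harness literally does), might look like:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &    # -i: shared-memory id, -e: tracepoint group mask
nvmfpid=$!

# wait until the app answers on its default RPC socket before configuring it
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done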
00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 [2024-12-09 15:23:09.384570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 Malloc0 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
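rpc_cmd in the trace is a thin wrapper around scripts/rpc.py pointed at /var/tmp/spdk.sock, so the five calls above amount to the following one-time target configuration. The names, sizes, and the 10.0.0.2:4420 listener are exactly those from the log; the flag annotations are the usual meanings of these rpc.py options.

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport; -u 8192 sets the I/O unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                             # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                           # matches the "Target Listening" notice below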
00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 [2024-12-09 15:23:09.468636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1634390 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1634390 /var/tmp/bdevperf.sock 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1634390 ']' 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:08.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.192 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.192 [2024-12-09 15:23:09.521657] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:30:08.193 [2024-12-09 15:23:09.521704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634390 ] 00:30:08.193 [2024-12-09 15:23:09.595222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.193 [2024-12-09 15:23:09.634206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:08.193 NVMe0n1 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.193 15:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:08.451 Running I/O for 10 seconds... 00:30:10.315 12265.00 IOPS, 47.91 MiB/s [2024-12-09T14:23:13.494Z] 12289.00 IOPS, 48.00 MiB/s [2024-12-09T14:23:14.427Z] 12290.33 IOPS, 48.01 MiB/s [2024-12-09T14:23:15.359Z] 12333.50 IOPS, 48.18 MiB/s [2024-12-09T14:23:16.291Z] 12478.20 IOPS, 48.74 MiB/s [2024-12-09T14:23:17.223Z] 12463.33 IOPS, 48.68 MiB/s [2024-12-09T14:23:18.155Z] 12498.57 IOPS, 48.82 MiB/s [2024-12-09T14:23:19.087Z] 12544.75 IOPS, 49.00 MiB/s [2024-12-09T14:23:20.457Z] 12581.00 IOPS, 49.14 MiB/s [2024-12-09T14:23:20.457Z] 12591.70 IOPS, 49.19 MiB/s 00:30:18.662 Latency(us) 00:30:18.662 [2024-12-09T14:23:20.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.662 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:18.662 Verification LBA range: start 0x0 length 0x4000 00:30:18.662 NVMe0n1 : 10.06 12617.32 49.29 0.00 0.00 80894.46 19723.22 52928.12 00:30:18.662 [2024-12-09T14:23:20.457Z] =================================================================================================================== 00:30:18.662 [2024-12-09T14:23:20.457Z] Total : 12617.32 49.29 0.00 0.00 80894.46 19723.22 52928.12 00:30:18.662 { 00:30:18.662 "results": [ 00:30:18.662 { 00:30:18.662 "job": "NVMe0n1", 00:30:18.662 "core_mask": "0x1", 00:30:18.662 "workload": "verify", 00:30:18.662 "status": "finished", 00:30:18.662 "verify_range": { 00:30:18.662 "start": 0, 00:30:18.662 "length": 16384 00:30:18.662 }, 00:30:18.662 "queue_depth": 1024, 00:30:18.662 "io_size": 4096, 00:30:18.662 "runtime": 10.060222, 00:30:18.662 "iops": 12617.315999587285, 00:30:18.662 "mibps": 49.28639062338783, 00:30:18.662 "io_failed": 0, 00:30:18.662 "io_timeout": 0, 00:30:18.662 "avg_latency_us": 80894.4565126334, 00:30:18.662 "min_latency_us": 19723.21523809524, 00:30:18.662 "max_latency_us": 52928.1219047619 00:30:18.662 } 
00:30:18.662 ], 00:30:18.662 "core_count": 1 00:30:18.662 } 00:30:18.662 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1634390 00:30:18.662 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1634390 ']' 00:30:18.662 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1634390 00:30:18.662 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:18.662 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1634390 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1634390' 00:30:18.663 killing process with pid 1634390 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1634390 00:30:18.663 Received shutdown signal, test time was about 10.000000 seconds 00:30:18.663 00:30:18.663 Latency(us) 00:30:18.663 [2024-12-09T14:23:20.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.663 [2024-12-09T14:23:20.458Z] =================================================================================================================== 00:30:18.663 [2024-12-09T14:23:20.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1634390 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.663 rmmod nvme_tcp 00:30:18.663 rmmod nvme_fabrics 00:30:18.663 rmmod nvme_keyring 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
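On the initiator side of the namespace pair, the queue-depth test is driven by bdevperf: it is started idle with -z, the exported subsystem is attached over the bdevperf RPC socket, and bdevperf.py's perform_tests starts the 10-second verify workload whose JSON summary appears above. Condensed from the trace (repository paths shortened for readability):

# queue depth 1024, 4 KiB I/Os, verify workload, 10 s runtime; -z waits for RPC before running
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# attach the target configured above as controller NVMe0 (exposes bdev NVMe0n1)
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# trigger the configured run and print the results
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The numbers are self-consistent: with 1024 I/Os kept in flight and about 12,617 IOPS completed, Little's law gives an average latency of roughly 1024 / 12617 s ≈ 81 ms, in line with the reported 80.9 ms average.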
00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1634362 ']' 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1634362 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1634362 ']' 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1634362 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.663 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1634362 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1634362' 00:30:18.921 killing process with pid 1634362 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1634362 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1634362 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.921 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.922 15:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.456 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.456 00:30:21.456 real 0m19.798s 00:30:21.456 user 0m22.987s 00:30:21.456 sys 0m6.170s 00:30:21.456 15:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.456 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:21.456 ************************************ 00:30:21.456 END TEST nvmf_queue_depth 00:30:21.456 ************************************ 00:30:21.456 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:21.456 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:21.456 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:21.457 ************************************ 00:30:21.457 START TEST nvmf_target_multipath 00:30:21.457 ************************************ 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:21.457 * Looking for test storage... 00:30:21.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:21.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.457 --rc genhtml_branch_coverage=1 00:30:21.457 --rc genhtml_function_coverage=1 00:30:21.457 --rc genhtml_legend=1 00:30:21.457 --rc geninfo_all_blocks=1 00:30:21.457 --rc geninfo_unexecuted_blocks=1 00:30:21.457 00:30:21.457 ' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:21.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.457 --rc genhtml_branch_coverage=1 00:30:21.457 --rc genhtml_function_coverage=1 00:30:21.457 --rc genhtml_legend=1 00:30:21.457 --rc geninfo_all_blocks=1 00:30:21.457 --rc geninfo_unexecuted_blocks=1 00:30:21.457 00:30:21.457 ' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:21.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.457 --rc genhtml_branch_coverage=1 00:30:21.457 --rc genhtml_function_coverage=1 00:30:21.457 --rc genhtml_legend=1 
00:30:21.457 --rc geninfo_all_blocks=1 00:30:21.457 --rc geninfo_unexecuted_blocks=1 00:30:21.457 00:30:21.457 ' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:21.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.457 --rc genhtml_branch_coverage=1 00:30:21.457 --rc genhtml_function_coverage=1 00:30:21.457 --rc genhtml_legend=1 00:30:21.457 --rc geninfo_all_blocks=1 00:30:21.457 --rc geninfo_unexecuted_blocks=1 00:30:21.457 00:30:21.457 ' 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.457 15:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:21.457 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.458 15:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.024 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.025 15:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:28.025 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:28.025 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.025 15:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:28.025 Found net devices under 0000:af:00.0: cvl_0_0 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:28.025 Found net devices under 0000:af:00.1: cvl_0_1 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.025 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:28.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:30:28.026 00:30:28.026 --- 10.0.0.2 ping statistics --- 00:30:28.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.026 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:30:28.026 00:30:28.026 --- 10.0.0.1 ping statistics --- 00:30:28.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.026 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:28.026 only one NIC for nvmf test 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:28.026 rmmod nvme_tcp 00:30:28.026 rmmod nvme_fabrics 00:30:28.026 rmmod nvme_keyring 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:28.026 15:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.026 15:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.403 15:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:29.403 15:23:31 
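The nvmf_tcp_init trace above splits the two e810 ports into a target side and an initiator side: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule tagged SPDK_NVMF opens port 4420, and a ping in each direction verifies the path before nvmftestfini tears everything down again. A condensed sketch of that sequence, using the names and addresses from this run; it illustrates the steps recorded in the trace rather than reproducing the SPDK helper itself, and the explicit namespace deletion is an assumption, since _remove_spdk_ns runs with its output suppressed:

# Interface and namespace names as they appear in the trace above.
TGT_IF=cvl_0_0            # target-side port, moved into the namespace
INI_IF=cvl_0_1            # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

# Setup: isolate the target port and address both sides on 10.0.0.0/24.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open NVMe/TCP port 4420 (rule tagged so cleanup can find it) and verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Teardown, mirroring nvmftestfini: drop the tagged rules, remove the
# namespace (assumed effect of the suppressed _remove_spdk_ns), flush the initiator port.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete "$NS"
ip -4 addr flush "$INI_IF"
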
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.403 00:30:29.403 real 0m8.237s 00:30:29.403 user 0m1.756s 00:30:29.403 sys 0m4.482s 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:29.403 ************************************ 00:30:29.403 END TEST nvmf_target_multipath 00:30:29.403 ************************************ 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:29.403 ************************************ 00:30:29.403 START TEST nvmf_zcopy 00:30:29.403 ************************************ 00:30:29.403 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:29.663 * Looking for test storage... 
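Each sub-test in this log runs under run_test, which produces the START TEST / END TEST banners and the real/user/sys timing printed at the end of nvmf_target_multipath above. A minimal stand-in sketch of that wrapper pattern, covering only the behaviour visible in the log; it is not the SPDK implementation, which also manages xtrace state and result bookkeeping:

# Hypothetical stand-in for the run_test helper whose banners and timing
# appear in this log; only the observable behaviour is reproduced here.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage mirroring the invocation recorded above:
#   run_test_sketch nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
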
00:30:29.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:29.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.663 --rc genhtml_branch_coverage=1 00:30:29.663 --rc genhtml_function_coverage=1 00:30:29.663 --rc genhtml_legend=1 00:30:29.663 --rc geninfo_all_blocks=1 00:30:29.663 --rc geninfo_unexecuted_blocks=1 00:30:29.663 00:30:29.663 ' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:29.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.663 --rc genhtml_branch_coverage=1 00:30:29.663 --rc genhtml_function_coverage=1 00:30:29.663 --rc genhtml_legend=1 00:30:29.663 --rc geninfo_all_blocks=1 00:30:29.663 --rc geninfo_unexecuted_blocks=1 00:30:29.663 00:30:29.663 ' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:29.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.663 --rc genhtml_branch_coverage=1 00:30:29.663 --rc genhtml_function_coverage=1 00:30:29.663 --rc genhtml_legend=1 00:30:29.663 --rc geninfo_all_blocks=1 00:30:29.663 --rc geninfo_unexecuted_blocks=1 00:30:29.663 00:30:29.663 ' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:29.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.663 --rc genhtml_branch_coverage=1 00:30:29.663 --rc genhtml_function_coverage=1 00:30:29.663 --rc genhtml_legend=1 00:30:29.663 --rc geninfo_all_blocks=1 00:30:29.663 --rc geninfo_unexecuted_blocks=1 00:30:29.663 00:30:29.663 ' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
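The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) decides whether the installed lcov is older than 2.x by splitting both version strings on '.', '-' and ':' and comparing them field by field; that is what selects the 1.x-style --rc lcov_branch_coverage/lcov_function_coverage options. A self-contained sketch of that comparison, assuming missing or non-numeric fields count as 0; the function name is illustrative, not the SPDK helper:

# Return 0 (true) if version $1 is strictly lower than version $2.
# Fields are split on '.', '-' and ':', mirroring the IFS=.-: reads in the trace.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0      # non-numeric fields treated as 0 in this sketch
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions compare equal
}

# As in the trace: 1.15 is lower than 2, so the lcov 1.x option set is chosen.
version_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
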
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.663 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.664 15:23:31 
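nvmf/common.sh, sourced above, derives the initiator identity from nvme gen-hostnqn and keeps the --hostnqn/--hostid pair in the NVME_HOST array so later tests can splice it into an nvme connect command. A hedged example of how those pieces combine on the initiator side; the address, port and subsystem NQN are the defaults visible in this log, the hostid derivation matches the relationship between NVME_HOSTNQN and NVME_HOSTID shown in the trace, and the connect invocation itself is illustrative rather than copied from this run:

# Identity as generated in the trace (gen-hostnqn embeds the host UUID).
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

# Illustrative connect to the test target; 10.0.0.2:4420 and
# nqn.2016-06.io.spdk:testnqn are the defaults set in nvmf/common.sh.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
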
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.664 15:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.231 15:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:36.231 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:36.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:36.231 Found net devices under 0000:af:00.0: cvl_0_0 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:36.231 Found net devices under 0000:af:00.1: cvl_0_1 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.231 15:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.231 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.231 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.231 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.232 15:23:37 
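The discovery pass above (gather_supported_nvmf_pci_devs followed by the per-device loop) works purely from sysfs: each supported PCI function such as 0000:af:00.0 exposes its kernel interface names under /sys/bus/pci/devices/<bdf>/net/, which is how the two e810 ports resolve to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that mapping, using the PCI addresses from this run; the operstate read stands in for the "up == up" check in the trace and is an assumption about how to express it:

# Map NVMe-oF-capable NICs to their kernel net devices via sysfs,
# as the trace above does for the two e810 ports (0x8086:0x159b).
net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    for net_path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $net_path ]] || continue             # no netdev bound to this function
        dev=${net_path##*/}                        # strip the sysfs path, keep e.g. cvl_0_0
        state=$(cat "$net_path/operstate" 2>/dev/null)
        echo "Found net devices under $pci: $dev (operstate: ${state:-unknown})"
        net_devs+=("$dev")
    done
done
echo "usable interfaces: ${net_devs[*]}"
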
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:30:36.232 00:30:36.232 --- 10.0.0.2 ping statistics --- 00:30:36.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.232 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:30:36.232 00:30:36.232 --- 10.0.0.1 ping statistics --- 00:30:36.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.232 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1643005 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1643005 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1643005 ']' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 [2024-12-09 15:23:37.341443] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:36.232 [2024-12-09 15:23:37.342352] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:30:36.232 [2024-12-09 15:23:37.342384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.232 [2024-12-09 15:23:37.420953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.232 [2024-12-09 15:23:37.459569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.232 [2024-12-09 15:23:37.459604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.232 [2024-12-09 15:23:37.459611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.232 [2024-12-09 15:23:37.459617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.232 [2024-12-09 15:23:37.459622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.232 [2024-12-09 15:23:37.460119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.232 [2024-12-09 15:23:37.526856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:36.232 [2024-12-09 15:23:37.527060] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
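nvmfappstart above launches nvmf_tgt inside the target namespace with interrupt mode and core mask 0x2, then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A sketch of that start-and-wait pattern, with paths relative to an SPDK tree; the readiness probe via rpc.py spdk_get_version is an assumption about how to wait, not a copy of SPDK's waitforlisten:

# Start the target in interrupt mode on core 1 (-m 0x2) inside the namespace,
# as recorded in the trace, then poll until its RPC socket responds.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
    # spdk_get_version only succeeds once the app is serving RPCs.
    if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is ready"
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 0.1
done
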
00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 [2024-12-09 15:23:37.592803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 [2024-12-09 15:23:37.621037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:36.232 15:23:37 
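The rpc_cmd calls above provision the zcopy target: a TCP transport created with --zcopy, one subsystem allowing any host, a listener on 10.0.0.2:4420, and a malloc bdev that is attached as namespace 1 just below in the trace. The same sequence expressed directly through SPDK's rpc.py, with the flags reproduced verbatim from the trace; treating rpc_cmd as a thin pass-through to rpc.py is an assumption about the test plumbing, not something shown in this section:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Transport flags exactly as traced above; --zcopy enables the zero-copy path under test.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# One subsystem: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listen for NVMe/TCP on the target-side address configured earlier.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 32 MiB RAM-backed bdev with 4096-byte blocks, exported as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
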
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 malloc0 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:36.232 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:36.232 { 00:30:36.233 "params": { 00:30:36.233 "name": "Nvme$subsystem", 00:30:36.233 "trtype": "$TEST_TRANSPORT", 00:30:36.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.233 "adrfam": "ipv4", 00:30:36.233 "trsvcid": "$NVMF_PORT", 00:30:36.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.233 "hdgst": ${hdgst:-false}, 00:30:36.233 "ddgst": ${ddgst:-false} 00:30:36.233 }, 00:30:36.233 "method": "bdev_nvme_attach_controller" 00:30:36.233 } 00:30:36.233 EOF 00:30:36.233 )") 00:30:36.233 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:36.233 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:36.233 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:36.233 15:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:36.233 "params": { 00:30:36.233 "name": "Nvme1", 00:30:36.233 "trtype": "tcp", 00:30:36.233 "traddr": "10.0.0.2", 00:30:36.233 "adrfam": "ipv4", 00:30:36.233 "trsvcid": "4420", 00:30:36.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:36.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:36.233 "hdgst": false, 00:30:36.233 "ddgst": false 00:30:36.233 }, 00:30:36.233 "method": "bdev_nvme_attach_controller" 00:30:36.233 }' 00:30:36.233 [2024-12-09 15:23:37.716422] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
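gen_nvmf_target_json, traced above, hands bdevperf its JSON configuration on an anonymous file descriptor (--json /dev/fd/62) rather than an on-disk file; the printf output shows the filled-in attach-controller entry. A sketch of the equivalent file-based setup: the params block is copied from the trace, while the outer subsystems/bdev/config wrapper is the usual SPDK JSON-config shape and is assumed here rather than shown in this section:

# Equivalent of the /dev/fd/62 config seen above, written to a file.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# 10-second verify workload, queue depth 128, 8 KiB I/O, as in the first bdevperf run above.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192
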
00:30:36.233 [2024-12-09 15:23:37.716464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643184 ] 00:30:36.233 [2024-12-09 15:23:37.787527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.233 [2024-12-09 15:23:37.826596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.233 Running I/O for 10 seconds... 00:30:38.542 8581.00 IOPS, 67.04 MiB/s [2024-12-09T14:23:41.272Z] 8641.00 IOPS, 67.51 MiB/s [2024-12-09T14:23:42.205Z] 8610.00 IOPS, 67.27 MiB/s [2024-12-09T14:23:43.139Z] 8640.75 IOPS, 67.51 MiB/s [2024-12-09T14:23:44.073Z] 8647.60 IOPS, 67.56 MiB/s [2024-12-09T14:23:45.448Z] 8663.17 IOPS, 67.68 MiB/s [2024-12-09T14:23:46.395Z] 8667.43 IOPS, 67.71 MiB/s [2024-12-09T14:23:47.329Z] 8673.12 IOPS, 67.76 MiB/s [2024-12-09T14:23:48.265Z] 8674.22 IOPS, 67.77 MiB/s [2024-12-09T14:23:48.265Z] 8675.40 IOPS, 67.78 MiB/s 00:30:46.470 Latency(us) 00:30:46.470 [2024-12-09T14:23:48.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.470 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:46.470 Verification LBA range: start 0x0 length 0x1000 00:30:46.470 Nvme1n1 : 10.01 8679.59 67.81 0.00 0.00 14705.54 2402.99 21221.18 00:30:46.470 [2024-12-09T14:23:48.265Z] =================================================================================================================== 00:30:46.470 [2024-12-09T14:23:48.265Z] Total : 8679.59 67.81 0.00 0.00 14705.54 2402.99 21221.18 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1644764 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:46.470 { 00:30:46.470 "params": { 00:30:46.470 "name": "Nvme$subsystem", 00:30:46.470 "trtype": "$TEST_TRANSPORT", 00:30:46.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.470 "adrfam": "ipv4", 00:30:46.470 "trsvcid": "$NVMF_PORT", 00:30:46.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.470 "hdgst": ${hdgst:-false}, 00:30:46.470 "ddgst": ${ddgst:-false} 00:30:46.470 }, 00:30:46.470 "method": "bdev_nvme_attach_controller" 00:30:46.470 } 00:30:46.470 EOF 00:30:46.470 )") 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:46.470 
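A quick consistency check on the latency table above: the MiB/s column is just IOPS multiplied by the 8192-byte I/O size used in this run.

# 8679.59 IO/s * 8192 B per I/O, expressed in MiB/s; rounds to the
# 67.81 MiB/s reported for Nvme1n1 in the table above.
awk 'BEGIN { printf "%.2f MiB/s\n", 8679.59 * 8192 / (1024 * 1024) }'
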
[2024-12-09 15:23:48.216465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.470 [2024-12-09 15:23:48.216496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:46.470 15:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:46.470 "params": { 00:30:46.470 "name": "Nvme1", 00:30:46.470 "trtype": "tcp", 00:30:46.470 "traddr": "10.0.0.2", 00:30:46.470 "adrfam": "ipv4", 00:30:46.470 "trsvcid": "4420", 00:30:46.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.470 "hdgst": false, 00:30:46.470 "ddgst": false 00:30:46.470 }, 00:30:46.470 "method": "bdev_nvme_attach_controller" 00:30:46.470 }' 00:30:46.470 [2024-12-09 15:23:48.228431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.470 [2024-12-09 15:23:48.228446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.470 [2024-12-09 15:23:48.240433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.470 [2024-12-09 15:23:48.240446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.470 [2024-12-09 15:23:48.251961] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:30:46.470 [2024-12-09 15:23:48.252004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1644764 ] 00:30:46.470 [2024-12-09 15:23:48.252429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.470 [2024-12-09 15:23:48.252439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.470 [2024-12-09 15:23:48.264430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.470 [2024-12-09 15:23:48.264441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.276428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.276437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.288426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.288436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.300427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.300438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.312427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.312437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.324429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.324441] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.324669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.729 [2024-12-09 15:23:48.336433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.336451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.348428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.348438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.360427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.360437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.364603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.729 [2024-12-09 15:23:48.372427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.372443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.384443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.384468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.396435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.396451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.408436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.408450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.420433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.420446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.432439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.432458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.444429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.444440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.456441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.456462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.468433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.468448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.480435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.729 [2024-12-09 15:23:48.480449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.729 [2024-12-09 15:23:48.492435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:30:46.730 [2024-12-09 15:23:48.492448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.730 [2024-12-09 15:23:48.504440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.730 [2024-12-09 15:23:48.504452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.730 [2024-12-09 15:23:48.516429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.730 [2024-12-09 15:23:48.516441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.528443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.528459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.540428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.540439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.552429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.552440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.564429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.564439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.576431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.576445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.588429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.588439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.600430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.600446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.612429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.612441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.624431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.624443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.636430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.636443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.648430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.648444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.660445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.660460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 
15:23:48.672432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.672451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 Running I/O for 5 seconds... 00:30:46.987 [2024-12-09 15:23:48.685002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.685022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.700714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.700734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.716458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.716478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.729169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.729188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.744515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.744534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.755850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.755869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.987 [2024-12-09 15:23:48.770113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.987 [2024-12-09 15:23:48.770133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.245 [2024-12-09 15:23:48.784869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.245 [2024-12-09 15:23:48.784888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.245 [2024-12-09 15:23:48.796994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.245 [2024-12-09 15:23:48.797012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.245 [2024-12-09 15:23:48.810372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.245 [2024-12-09 15:23:48.810392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.245 [2024-12-09 15:23:48.825159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.245 [2024-12-09 15:23:48.825178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.245 [2024-12-09 15:23:48.840241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.245 [2024-12-09 15:23:48.840260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.245 [2024-12-09 15:23:48.854588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.245 [2024-12-09 15:23:48.854607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.869127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
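The second bdevperf invocation traced above (target/zcopy.sh@37) is driven entirely by the JSON that gen_nvmf_target_json writes to /dev/fd/63: the heredoc template expands into the single bdev_nvme_attach_controller entry printed right after it (Nvme1 over TCP to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, host nqn.2016-06.io.spdk:host1, header/data digests disabled). The throughput column of the earlier 10-second run is consistent with its 8192-byte I/O size: 8675.40 IOPS * 8192 B / 2^20 = 67.78 MiB/s. The following is a minimal Python sketch that rebuilds only the fragment visible in this log; whatever enclosing structure gen_nvmf_target_json adds around it before bdevperf parses the file is not shown in this excerpt, so it is deliberately left out.

import json

# Rebuild the bdev_nvme_attach_controller entry exactly as printed in the
# trace above; every value below is taken from the log itself.
attach_controller = {
    "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": False,  # TCP header digest disabled, as in the generated config
        "ddgst": False,  # TCP data digest disabled
    },
    "method": "bdev_nvme_attach_controller",
}

print(json.dumps(attach_controller, indent=2))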
00:30:47.246 [2024-12-09 15:23:48.869146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.880993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.881011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.894046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.894066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.908836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.908855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.921298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.921317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.933860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.933880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.944141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.944160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.957967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.957987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.972595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.972614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.983748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.983768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:48.998456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:48.998475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:49.013240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:49.013260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.246 [2024-12-09 15:23:49.027730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.246 [2024-12-09 15:23:49.027749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.042064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.042084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.056591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.056610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.069023] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.069041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.081806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.081825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.096742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.096760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.112532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.112551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.124838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.124857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.138225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.138243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.153282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.153300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.168294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.168313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.181142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.181161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.196514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.196537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.209125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.209144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.222170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.222188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.236866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.236886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.251522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.251540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.265823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.265841] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.280077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.280096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.504 [2024-12-09 15:23:49.294045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.504 [2024-12-09 15:23:49.294063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.308797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.308815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.323450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.323467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.338761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.338780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.353571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.353589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.368049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.368067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.382167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.382186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.396887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.396905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.412064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.412082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.426665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.426684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.441073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.441091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.456274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.456293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.469875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.469895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.484461] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.484480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.496639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.496657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.510187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.510207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.525358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.525377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.540428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.540446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.763 [2024-12-09 15:23:49.553425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.763 [2024-12-09 15:23:49.553444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.568367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.568395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.582416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.582435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.596574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.596593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.609976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.609995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.624295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.624314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.638242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.638280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.652692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.652710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.665775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.665794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.676862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.676881] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 16830.00 IOPS, 131.48 MiB/s [2024-12-09T14:23:49.817Z] [2024-12-09 15:23:49.689891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.689910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.700514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.700533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.714251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.714270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.728777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.728795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.745157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.745175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.760664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.760683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.773190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.773215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.786311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.786329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.800828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.800847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.022 [2024-12-09 15:23:49.816873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.022 [2024-12-09 15:23:49.816892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.829031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.829049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.841539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.841557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.853718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.853736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.864720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.864739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 
15:23:49.878314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.878332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.892689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.892713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.903601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.903619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.918282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.918302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.932713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.932731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.947720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.947739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.962177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.962196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.976536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.976555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:49.989229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:49.989248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:50.001627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:50.001646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:50.016671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:50.016694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:50.027745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:50.027765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:50.042584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:50.042604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:50.057471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:50.057490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.281 [2024-12-09 15:23:50.072400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.281 [2024-12-09 15:23:50.072420] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.086112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.086132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.101025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.101045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.116105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.116125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.130090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.130112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.145011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.145030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.160214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.160245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.174296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.174315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.189025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.189044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.205298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.205318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.219993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.220012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.232774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.232793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.246251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.246271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.260772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.260791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.273162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.273181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.285866] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.285885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.300875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.300894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.316652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.316671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.540 [2024-12-09 15:23:50.330415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.540 [2024-12-09 15:23:50.330436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.345641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.345661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.359918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.359937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.374927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.374946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.389391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.389411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.404589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.404609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.416324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.416344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.430066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.430085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.444650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.444669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.455294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.455314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.470099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.470117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.484727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.484746] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.500035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.500054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.798 [2024-12-09 15:23:50.513256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.798 [2024-12-09 15:23:50.513275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.799 [2024-12-09 15:23:50.527888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.799 [2024-12-09 15:23:50.527907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.799 [2024-12-09 15:23:50.539927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.799 [2024-12-09 15:23:50.539946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.799 [2024-12-09 15:23:50.554048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.799 [2024-12-09 15:23:50.554066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.799 [2024-12-09 15:23:50.568377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.799 [2024-12-09 15:23:50.568396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:48.799 [2024-12-09 15:23:50.581988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:48.799 [2024-12-09 15:23:50.582007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.597031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.597049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.612479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.612500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.625516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.625535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.640531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.640550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.653824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.653843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.668413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.668432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.681173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.681192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 16828.50 IOPS, 131.47 MiB/s [2024-12-09T14:23:50.852Z] [2024-12-09 
15:23:50.694263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.694281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.708985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.709003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.724607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.724626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.738489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.738508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.752723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.752740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.768361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.768381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.782693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.782712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.797061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.797079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.812395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.812414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.826154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.826173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.057 [2024-12-09 15:23:50.840855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.057 [2024-12-09 15:23:50.840873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.856427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.856446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.870359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.870377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.885013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.885031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.900207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.900231] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.913965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.913983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.923448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.923466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.937838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.937856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.952805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.952824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.968482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.968501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.981202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.981226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:50.995895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:50.995913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.009881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.009899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.024696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.024715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.036135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.036153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.050350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.050369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.064616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.064634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.078350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.078368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.092756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.092774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.316 [2024-12-09 15:23:51.108484] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.316 [2024-12-09 15:23:51.108503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.574 [2024-12-09 15:23:51.122653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.574 [2024-12-09 15:23:51.122672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.137283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.137302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.153023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.153042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.168931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.168949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.184052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.184071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.197197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.197215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.212355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.212374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.225645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.225669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.240605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.240624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.253852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.253871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.268292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.268310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.281561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.281580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.295599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.295618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.310115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.310134] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.324294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.324314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.336901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.336920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.350132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.350150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.575 [2024-12-09 15:23:51.364472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.575 [2024-12-09 15:23:51.364491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.376753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.376771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.389746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.389764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.400709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.400727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.414043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.414062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.428357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.428377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.441488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.441506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.455736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.455754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.470159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.470178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.484648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.484671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.497240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.497258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.512228] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.512247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.524738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.524758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.537947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.537966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.833 [2024-12-09 15:23:51.552442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.833 [2024-12-09 15:23:51.552461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.834 [2024-12-09 15:23:51.565687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.834 [2024-12-09 15:23:51.565708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.834 [2024-12-09 15:23:51.576942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.834 [2024-12-09 15:23:51.576961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.834 [2024-12-09 15:23:51.590045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.834 [2024-12-09 15:23:51.590064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.834 [2024-12-09 15:23:51.604834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.834 [2024-12-09 15:23:51.604853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:49.834 [2024-12-09 15:23:51.620317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:49.834 [2024-12-09 15:23:51.620337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.632665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.632684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.645743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.645762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.656797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.656815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.670571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.670590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.685243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.685261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 16880.33 IOPS, 131.88 MiB/s [2024-12-09T14:23:51.887Z] [2024-12-09 15:23:51.699696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
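The error pairs that dominate this stretch of the log (subsystem.c:2130 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520 "Unable to add namespace") are the target repeatedly refusing namespace-add RPCs for an NSID that nqn.2016-06.io.spdk:cnode1 already exposes, while the 5-second bdevperf job (-q 128 -w randrw -M 50 -o 8192) keeps I/O in flight; the interleaved samples again match the 8192-byte I/O size, e.g. 16880.33 IOPS * 8192 B / 2^20 = 131.88 MiB/s. Below is a minimal sketch of how such an nvmf_subsystem_add_ns request could be issued directly over SPDK's JSON-RPC socket. It is not the test's own code: the default /var/tmp/spdk.sock path, the exact parameter layout, and the "Malloc0" bdev name are assumptions for illustration, and the test itself presumably drives this through the project's rpc.py helpers instead of raw sockets.

import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # Send one JSON-RPC 2.0 request to the SPDK application socket and return
    # the decoded reply. Sufficient for a one-off call; it does not handle
    # replies larger than a single recv() buffer.
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        return json.loads(sock.recv(1 << 16).decode())

# Asking the subsystem to add a namespace under an NSID it already uses is
# rejected by spdk_nvmf_subsystem_add_ns_ext(), which is what produces the
# error pairs above. "Malloc0" is a placeholder bdev name, not from this log.
reply = spdk_rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "Malloc0", "nsid": 1},
})
print(reply)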
00:30:50.092 [2024-12-09 15:23:51.699715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.713139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.713158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.725765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.725783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.736392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.736415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.750503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.750522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.764544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.764563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.776887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.776906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.789951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.789970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.799790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.799809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.814230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.814249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.828522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.828540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.840854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.840872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.854092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.854111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.868708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.868726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.092 [2024-12-09 15:23:51.881253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.092 [2024-12-09 15:23:51.881273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.896186] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.896205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.909866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.909885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.924126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.924145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.937412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.937430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.949771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.949790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.964355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.964376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.976743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.976762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:51.989975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:51.989994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.000529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.000547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.014045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.014063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.028170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.028189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.041480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.041498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.052417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.052435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.066232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.066250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.080766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.080783] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.096014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.096033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.109267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.109286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.123974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.123993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.351 [2024-12-09 15:23:52.137062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.351 [2024-12-09 15:23:52.137081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.151836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.151855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.165152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.165170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.180478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.180497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.192945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.192964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.206140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.206158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.220411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.220431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.233357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.233375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.248703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.248721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.263880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.263900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.277875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.277893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.609 [2024-12-09 15:23:52.288607] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.609 [2024-12-09 15:23:52.288625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.302280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.302300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.317006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.317025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.332859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.332878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.348337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.348356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.362482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.362501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.376823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.376841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.610 [2024-12-09 15:23:52.391942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.610 [2024-12-09 15:23:52.391960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.406204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.406228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.420388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.420407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.433177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.433195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.448377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.448396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.461346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.461364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.475841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.475860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.490030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.490049] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.505015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.505033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.519878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.519897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.533147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.533165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.549062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.549080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.564471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.564490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.575929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.575948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.589957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.589975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.599964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.599982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.614107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.614126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.628596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.628615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.639452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.639471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.869 [2024-12-09 15:23:52.654150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.869 [2024-12-09 15:23:52.654168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.668555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.668575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.678948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.678966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 16940.75 IOPS, 132.35 MiB/s [2024-12-09T14:23:52.922Z] [2024-12-09 
15:23:52.693430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.693449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.708154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.708173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.719719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.719738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.734337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.734356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.748942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.748960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.127 [2024-12-09 15:23:52.759923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.127 [2024-12-09 15:23:52.759946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.774633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.774651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.789167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.789186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.801499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.801518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.814166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.814185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.828294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.828312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.841228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.841247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.856019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.856038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.869091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.869109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.882355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.882373] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.896660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.896678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.128 [2024-12-09 15:23:52.907948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.128 [2024-12-09 15:23:52.907967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:52.922690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:52.922709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:52.937023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:52.937041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:52.949953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:52.949972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:52.964158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:52.964177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:52.977518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:52.977538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:52.992441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:52.992460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.005300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.005320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.020302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.020328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.033337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.033357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.047901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.047920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.062291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.062310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.076605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.076623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.087308] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.087326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.101969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.101988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.116301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.116322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.129215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.129241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.139510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.139530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.154168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.154187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.168679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.168698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.387 [2024-12-09 15:23:53.179520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.387 [2024-12-09 15:23:53.179539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.193921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.193941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.208020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.208039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.221796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.221816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.236157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.236176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.249492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.249511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.264038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.264057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.277833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.277857] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.292532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.292551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.302819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.302839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.317492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.317512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.332303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.332323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.346450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.346469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.645 [2024-12-09 15:23:53.361689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.645 [2024-12-09 15:23:53.361709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.646 [2024-12-09 15:23:53.376500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.646 [2024-12-09 15:23:53.376520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.646 [2024-12-09 15:23:53.389955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.646 [2024-12-09 15:23:53.389974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.646 [2024-12-09 15:23:53.404622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.646 [2024-12-09 15:23:53.404641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.646 [2024-12-09 15:23:53.415731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.646 [2024-12-09 15:23:53.415750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.646 [2024-12-09 15:23:53.430370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.646 [2024-12-09 15:23:53.430388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.444839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.444858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.459852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.459871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.473944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.473964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.483984] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.484003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.498262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.498280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.512431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.512450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.524845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.524863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.537853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.537875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.548526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.548544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.561914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.561932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.575942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.575961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.589242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.589261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.603824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.603843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.617385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.617404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.629693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.629712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.640736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.640754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.654671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.654690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:51.904 [2024-12-09 15:23:53.669154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:51.904 [2024-12-09 15:23:53.669173] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:51.904 [2024-12-09 15:23:53.683389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:51.904 [2024-12-09 15:23:53.683408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:51.904 16976.00 IOPS, 132.62 MiB/s [2024-12-09T14:23:53.699Z] [2024-12-09 15:23:53.697208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:51.904 [2024-12-09 15:23:53.697233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163
00:30:52.163 Latency(us)
00:30:52.163 [2024-12-09T14:23:53.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:52.163 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:52.163 Nvme1n1 : 5.01 16975.29 132.62 0.00 0.00 7532.56 1981.68 13294.45
00:30:52.163 [2024-12-09T14:23:53.958Z] ===================================================================================================================
00:30:52.163 [2024-12-09T14:23:53.958Z] Total : 16975.29 132.62 0.00 0.00 7532.56 1981.68 13294.45
00:30:52.163 [2024-12-09 15:23:53.708433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.708450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.720433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.720447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.732448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.732475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.744436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.744455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.756436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.756452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.768432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.768447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.780435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.780453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.792431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.792445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.804437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:52.163 [2024-12-09 15:23:53.804452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:52.163 [2024-12-09 15:23:53.816429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-12-09
15:23:53.816439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.163 [2024-12-09 15:23:53.828435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.163 [2024-12-09 15:23:53.828448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.163 [2024-12-09 15:23:53.840431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.163 [2024-12-09 15:23:53.840444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.163 [2024-12-09 15:23:53.852427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:52.163 [2024-12-09 15:23:53.852438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:52.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1644764) - No such process 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1644764 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:52.163 delay0 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.163 15:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:52.421 [2024-12-09 15:23:54.001474] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:59.115 Initializing NVMe Controllers 00:30:59.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.115 Initialization complete. Launching workers. 
00:30:59.115 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 293, failed: 7382 00:30:59.115 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7618, failed to submit 57 00:30:59.115 success 7491, unsuccessful 127, failed 0 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.115 rmmod nvme_tcp 00:30:59.115 rmmod nvme_fabrics 00:30:59.115 rmmod nvme_keyring 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1643005 ']' 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1643005 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1643005 ']' 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1643005 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:59.115 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643005 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643005' 00:30:59.116 killing process with pid 1643005 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1643005 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1643005 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:59.116 15:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.116 15:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.020 00:31:01.020 real 0m31.411s 00:31:01.020 user 0m40.726s 00:31:01.020 sys 0m12.114s 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:01.020 ************************************ 00:31:01.020 END TEST nvmf_zcopy 00:31:01.020 ************************************ 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:01.020 ************************************ 00:31:01.020 START TEST nvmf_nmic 00:31:01.020 ************************************ 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:01.020 * Looking for test storage... 
00:31:01.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:01.020 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:01.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.021 --rc genhtml_branch_coverage=1 00:31:01.021 --rc genhtml_function_coverage=1 00:31:01.021 --rc genhtml_legend=1 00:31:01.021 --rc geninfo_all_blocks=1 00:31:01.021 --rc geninfo_unexecuted_blocks=1 00:31:01.021 00:31:01.021 ' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:01.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.021 --rc genhtml_branch_coverage=1 00:31:01.021 --rc genhtml_function_coverage=1 00:31:01.021 --rc genhtml_legend=1 00:31:01.021 --rc geninfo_all_blocks=1 00:31:01.021 --rc geninfo_unexecuted_blocks=1 00:31:01.021 00:31:01.021 ' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:01.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.021 --rc genhtml_branch_coverage=1 00:31:01.021 --rc genhtml_function_coverage=1 00:31:01.021 --rc genhtml_legend=1 00:31:01.021 --rc geninfo_all_blocks=1 00:31:01.021 --rc geninfo_unexecuted_blocks=1 00:31:01.021 00:31:01.021 ' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:01.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.021 --rc genhtml_branch_coverage=1 00:31:01.021 --rc genhtml_function_coverage=1 00:31:01.021 --rc genhtml_legend=1 00:31:01.021 --rc geninfo_all_blocks=1 00:31:01.021 --rc geninfo_unexecuted_blocks=1 00:31:01.021 00:31:01.021 ' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.021 15:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.021 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.281 15:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:07.851 15:24:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:07.851 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.851 15:24:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:07.851 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.851 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:07.852 Found net devices under 0000:af:00.0: cvl_0_0 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.852 
15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:07.852 Found net devices under 0000:af:00.1: cvl_0_1 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
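The entries above are the harness's nvmf_tcp_init step: it takes the two detected e810 ports (cvl_0_0, cvl_0_1), moves the target-side port into its own network namespace, and puts 10.0.0.1/24 on the initiator side and 10.0.0.2/24 on the target side; the entries that follow bring both links up and add the SPDK_NVMF-tagged iptables rule for port 4420. A hand-condensed sketch of that topology, using the interface names and addresses from this run (the real logic lives in test/nvmf/common.sh):

  # minimal sketch of the namespace-based TCP test topology used in this run
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  NVMF_INITIATOR_IP=10.0.0.1        # stays on the host, on cvl_0_1
  NVMF_FIRST_TARGET_IP=10.0.0.2     # lives inside the namespace, on cvl_0_0

  ip -4 addr flush cvl_0_0                         # start from clean ports
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"            # target gets its own netns
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
  ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
  ip netns exec "$NVMF_TARGET_NAMESPACE" \
      ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0

The namespace keeps target and initiator traffic on separate interfaces even though the whole test runs on one machine, which is what the two ping checks below verify in both directions.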
00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:07.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:31:07.852 00:31:07.852 --- 10.0.0.2 ping statistics --- 00:31:07.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.852 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:07.852 00:31:07.852 --- 10.0.0.1 ping statistics --- 00:31:07.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.852 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1650581 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1650581 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1650581 ']' 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.852 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.852 [2024-12-09 15:24:08.710414] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:07.852 [2024-12-09 15:24:08.711347] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:31:07.852 [2024-12-09 15:24:08.711385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.852 [2024-12-09 15:24:08.785941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.852 [2024-12-09 15:24:08.827025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.852 [2024-12-09 15:24:08.827063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.852 [2024-12-09 15:24:08.827069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.852 [2024-12-09 15:24:08.827075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.852 [2024-12-09 15:24:08.827080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.852 [2024-12-09 15:24:08.828540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.852 [2024-12-09 15:24:08.828648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.852 [2024-12-09 15:24:08.828748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.852 [2024-12-09 15:24:08.828750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.852 [2024-12-09 15:24:08.897393] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:07.852 [2024-12-09 15:24:08.897731] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:07.852 [2024-12-09 15:24:08.898197] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
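nvmfappstart then launches the target inside that namespace with interrupt mode enabled and blocks until its RPC socket answers; the DPDK EAL parameters, the four reactor start-up notices, and the "Set spdk_thread ... to intr mode" messages above are that target coming up on cores 0-3. A rough stand-alone equivalent is sketched below; the polling loop is a simplification of the harness's waitforlisten helper, not a copy of it, and rpc_get_methods is just used as a convenient no-op probe:

  # start nvmf_tgt in the target namespace: shm id 0, trace group mask 0xFFFF,
  # interrupt mode, core mask 0xF -- the same flags recorded in the trace above
  ip netns exec cvl_0_0_ns_spdk \
      /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!

  # crude wait until the app is listening on /var/tmp/spdk.sock
  until /path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done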
00:31:07.852 [2024-12-09 15:24:08.898361] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:07.852 [2024-12-09 15:24:08.898423] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 [2024-12-09 15:24:08.977617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 Malloc0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
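Test case 1 is assembled from the RPCs traced above: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as its namespace, and a listener on 10.0.0.2:4420. The same sequence, collected in one place and issued through the in-tree rpc.py client (the -o and -u 8192 transport options are simply what the test passes; /path/to/spdk is a placeholder):

  RPC="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, same options as the test
  $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # attach Malloc0 as a namespace of cnode1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The next entries create a second subsystem (cnode2) and try to add the same Malloc0 to it, which is expected to fail because the bdev is already claimed exclusive_write by the first subsystem; the JSON-RPC "Invalid parameters" response below is that expected failure.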
00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 [2024-12-09 15:24:09.065766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:07.853 test case1: single bdev can't be used in multiple subsystems 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 [2024-12-09 15:24:09.097241] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:07.853 [2024-12-09 15:24:09.097261] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:07.853 [2024-12-09 15:24:09.097269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.853 request: 00:31:07.853 { 00:31:07.853 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:07.853 "namespace": { 00:31:07.853 "bdev_name": "Malloc0", 00:31:07.853 "no_auto_visible": false, 00:31:07.853 "hide_metadata": false 00:31:07.853 }, 00:31:07.853 "method": "nvmf_subsystem_add_ns", 00:31:07.853 "req_id": 1 00:31:07.853 } 00:31:07.853 Got JSON-RPC error response 00:31:07.853 response: 00:31:07.853 { 00:31:07.853 "code": -32602, 00:31:07.853 "message": "Invalid parameters" 00:31:07.853 } 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:07.853 15:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:07.853 Adding namespace failed - expected result. 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:07.853 test case2: host connect to nvmf target in multiple paths 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 [2024-12-09 15:24:09.109331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:07.853 15:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:10.378 15:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:10.378 [global] 00:31:10.378 thread=1 00:31:10.378 invalidate=1 
00:31:10.378 rw=write 00:31:10.378 time_based=1 00:31:10.378 runtime=1 00:31:10.378 ioengine=libaio 00:31:10.378 direct=1 00:31:10.378 bs=4096 00:31:10.378 iodepth=1 00:31:10.378 norandommap=0 00:31:10.378 numjobs=1 00:31:10.378 00:31:10.378 verify_dump=1 00:31:10.378 verify_backlog=512 00:31:10.378 verify_state_save=0 00:31:10.378 do_verify=1 00:31:10.378 verify=crc32c-intel 00:31:10.378 [job0] 00:31:10.378 filename=/dev/nvme0n1 00:31:10.378 Could not set queue depth (nvme0n1) 00:31:10.378 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:10.378 fio-3.35 00:31:10.378 Starting 1 thread 00:31:11.310 00:31:11.310 job0: (groupid=0, jobs=1): err= 0: pid=1651282: Mon Dec 9 15:24:13 2024 00:31:11.310 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:11.310 slat (nsec): min=7135, max=40719, avg=8050.93, stdev=1347.05 00:31:11.310 clat (usec): min=166, max=409, avg=204.09, stdev=26.34 00:31:11.310 lat (usec): min=183, max=418, avg=212.14, stdev=26.41 00:31:11.310 clat percentiles (usec): 00:31:11.310 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 184], 20.00th=[ 186], 00:31:11.310 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 190], 60.00th=[ 194], 00:31:11.310 | 70.00th=[ 198], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:31:11.310 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 289], 99.95th=[ 379], 00:31:11.310 | 99.99th=[ 408] 00:31:11.310 write: IOPS=2754, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:31:11.310 slat (usec): min=9, max=25920, avg=21.30, stdev=493.43 00:31:11.310 clat (usec): min=122, max=387, avg=137.92, stdev=14.39 00:31:11.310 lat (usec): min=133, max=26131, avg=159.21, stdev=495.05 00:31:11.310 clat percentiles (usec): 00:31:11.310 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:31:11.310 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:31:11.310 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:31:11.310 | 99.00th=[ 212], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 375], 00:31:11.310 | 99.99th=[ 388] 00:31:11.310 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:11.310 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:11.310 lat (usec) : 250=96.90%, 500=3.10% 00:31:11.310 cpu : usr=5.10%, sys=7.70%, ctx=5321, majf=0, minf=1 00:31:11.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.310 issued rwts: total=2560,2757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:11.310 00:31:11.310 Run status group 0 (all jobs): 00:31:11.310 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:11.310 WRITE: bw=10.8MiB/s (11.3MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=10.8MiB (11.3MB), run=1001-1001msec 00:31:11.310 00:31:11.310 Disk stats (read/write): 00:31:11.310 nvme0n1: ios=2240/2560, merge=0/0, ticks=1423/315, in_queue=1738, util=98.50% 00:31:11.310 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:11.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:11.568 rmmod nvme_tcp 00:31:11.568 rmmod nvme_fabrics 00:31:11.568 rmmod nvme_keyring 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1650581 ']' 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1650581 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1650581 ']' 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1650581 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.568 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650581 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650581' 00:31:11.826 killing process with pid 
1650581 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1650581 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1650581 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:11.826 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:11.827 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.827 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.827 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.827 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.827 15:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.358 00:31:14.358 real 0m13.037s 00:31:14.358 user 0m23.705s 00:31:14.358 sys 0m6.127s 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:14.358 ************************************ 00:31:14.358 END TEST nvmf_nmic 00:31:14.358 ************************************ 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.358 ************************************ 00:31:14.358 START TEST nvmf_fio_target 00:31:14.358 ************************************ 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:14.358 * Looking for test storage... 
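After the fio pass, the nmic test disconnects from cnode1 (dropping the 4420 and 4421 paths together, hence "disconnected 2 controller(s)"), waits for the serial to disappear from lsblk, and then nvmftestfini unloads the initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills nvmf_tgt (pid 1650581), strips the SPDK_NVMF-tagged iptables rules, and removes the namespace. A condensed sketch of that cleanup, using the names from this run ($nvmfpid stands in for the recorded pid):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both paths to the subsystem at once
  kill "$nvmfpid"                                        # killprocess equivalent for the target pid
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # unload the initiator-side kernel modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the rules the harness tagged
  ip netns delete cvl_0_0_ns_spdk                        # tear down the target namespace
  ip -4 addr flush cvl_0_1                               # and clear the initiator-side address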
00:31:14.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.358 --rc genhtml_branch_coverage=1 00:31:14.358 --rc genhtml_function_coverage=1 00:31:14.358 --rc genhtml_legend=1 00:31:14.358 --rc geninfo_all_blocks=1 00:31:14.358 --rc geninfo_unexecuted_blocks=1 00:31:14.358 00:31:14.358 ' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.358 --rc genhtml_branch_coverage=1 00:31:14.358 --rc genhtml_function_coverage=1 00:31:14.358 --rc genhtml_legend=1 00:31:14.358 --rc geninfo_all_blocks=1 00:31:14.358 --rc geninfo_unexecuted_blocks=1 00:31:14.358 00:31:14.358 ' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.358 --rc genhtml_branch_coverage=1 00:31:14.358 --rc genhtml_function_coverage=1 00:31:14.358 --rc genhtml_legend=1 00:31:14.358 --rc geninfo_all_blocks=1 00:31:14.358 --rc geninfo_unexecuted_blocks=1 00:31:14.358 00:31:14.358 ' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:14.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.358 --rc genhtml_branch_coverage=1 00:31:14.358 --rc genhtml_function_coverage=1 00:31:14.358 --rc genhtml_legend=1 00:31:14.358 --rc geninfo_all_blocks=1 00:31:14.358 --rc geninfo_unexecuted_blocks=1 00:31:14.358 
00:31:14.358 ' 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.358 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.359 15:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.925 15:24:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.925 15:24:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:20.925 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:20.925 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:20.925 Found net 
devices under 0000:af:00.0: cvl_0_0 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:20.925 Found net devices under 0000:af:00.1: cvl_0_1 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.925 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:31:20.926 00:31:20.926 --- 10.0.0.2 ping statistics --- 00:31:20.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.926 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:20.926 00:31:20.926 --- 10.0.0.1 ping statistics --- 00:31:20.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.926 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1654910 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1654910 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1654910 ']' 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
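For reference, the ip/iptables/ping sequence logged above is what nvmf_tcp_init does on a phy setup once the two e810 ports (cvl_0_0 and cvl_0_1) are detected: the first port is moved into a private network namespace and becomes the target side (10.0.0.2), while the second stays in the root namespace as the initiator side (10.0.0.1). A minimal manual sketch using the interface names and addresses from this run (they will differ on other hosts):

# Build the two-sided NVMe/TCP test topology by hand (values taken from this run).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to port 4420 and verify reachability both ways,
# mirroring the ipts helper call and the two pings above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1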
00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.926 15:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.926 [2024-12-09 15:24:21.867652] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.926 [2024-12-09 15:24:21.868602] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:31:20.926 [2024-12-09 15:24:21.868641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.926 [2024-12-09 15:24:21.949548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.926 [2024-12-09 15:24:21.990231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.926 [2024-12-09 15:24:21.990267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.926 [2024-12-09 15:24:21.990274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.926 [2024-12-09 15:24:21.990284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.926 [2024-12-09 15:24:21.990289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.926 [2024-12-09 15:24:21.991756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.926 [2024-12-09 15:24:21.991864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.926 [2024-12-09 15:24:21.991951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.926 [2024-12-09 15:24:21.991953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.926 [2024-12-09 15:24:22.060415] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.926 [2024-12-09 15:24:22.060710] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.926 [2024-12-09 15:24:22.061195] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.926 [2024-12-09 15:24:22.061323] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.926 [2024-12-09 15:24:22.061396] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
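The DPDK EAL banner, the four reactor notices and the "Set spdk_thread ... to intr mode" messages above confirm that nvmf_tgt came up inside the namespace with --interrupt-mode on cores 0-3, and waitforlisten then blocks until the RPC socket answers. A hedged manual equivalent, run from the SPDK repo root (the polling loop is a simplified stand-in for the waitforlisten helper, not its actual implementation):

# Start the target inside the test namespace with the same flags as nvmfappstart above.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Wait until the app responds on its default RPC socket before provisioning it.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done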
00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:20.926 [2024-12-09 15:24:22.296726] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:20.926 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.185 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:21.185 15:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.444 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:21.444 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.444 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:21.444 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:21.703 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.961 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:21.961 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.219 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:22.219 15:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.219 15:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:22.219 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:22.476 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:22.734 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:22.734 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.991 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:22.991 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:22.991 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.249 [2024-12-09 15:24:24.940637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.249 15:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:23.508 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:23.765 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:24.023 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:24.023 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:24.023 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:24.023 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:24.023 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:24.023 15:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:25.917 15:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:25.917 [global] 00:31:25.917 thread=1 00:31:25.917 invalidate=1 00:31:25.917 rw=write 00:31:25.917 time_based=1 00:31:25.917 runtime=1 00:31:25.917 ioengine=libaio 00:31:25.917 direct=1 00:31:25.917 bs=4096 00:31:25.917 iodepth=1 00:31:25.917 norandommap=0 00:31:25.917 numjobs=1 00:31:25.917 00:31:25.917 verify_dump=1 00:31:25.917 verify_backlog=512 00:31:25.917 verify_state_save=0 00:31:25.917 do_verify=1 00:31:25.917 verify=crc32c-intel 00:31:25.917 [job0] 00:31:25.917 filename=/dev/nvme0n1 00:31:25.917 [job1] 00:31:25.917 filename=/dev/nvme0n2 00:31:25.917 [job2] 00:31:25.917 filename=/dev/nvme0n3 00:31:25.917 [job3] 00:31:25.917 filename=/dev/nvme0n4 00:31:25.917 Could not set queue depth (nvme0n1) 00:31:25.917 Could not set queue depth (nvme0n2) 00:31:25.917 Could not set queue depth (nvme0n3) 00:31:25.918 Could not set queue depth (nvme0n4) 00:31:26.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.174 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.174 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.174 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:26.174 fio-3.35 00:31:26.174 Starting 4 threads 00:31:27.546 00:31:27.546 job0: (groupid=0, jobs=1): err= 0: pid=1656152: Mon Dec 9 15:24:29 2024 00:31:27.546 read: IOPS=2316, BW=9267KiB/s (9489kB/s)(9276KiB/1001msec) 00:31:27.546 slat (nsec): min=7256, max=31270, avg=8225.63, stdev=1453.80 00:31:27.546 clat (usec): min=189, max=400, avg=218.90, stdev=23.05 00:31:27.546 lat (usec): min=197, max=431, avg=227.12, stdev=23.41 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:31:27.546 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:31:27.546 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 247], 95.00th=[ 273], 00:31:27.546 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 343], 99.95th=[ 367], 00:31:27.546 | 99.99th=[ 400] 00:31:27.546 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:27.546 slat (nsec): min=9473, max=60742, avg=11871.80, stdev=2036.03 00:31:27.546 clat (usec): min=137, max=349, avg=166.76, stdev=19.45 00:31:27.546 lat (usec): min=148, max=360, avg=178.63, stdev=19.82 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:31:27.546 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:31:27.546 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:31:27.546 | 99.00th=[ 249], 99.50th=[ 
273], 99.90th=[ 334], 99.95th=[ 334], 00:31:27.546 | 99.99th=[ 351] 00:31:27.546 bw ( KiB/s): min=10984, max=10984, per=35.63%, avg=10984.00, stdev= 0.00, samples=1 00:31:27.546 iops : min= 2746, max= 2746, avg=2746.00, stdev= 0.00, samples=1 00:31:27.546 lat (usec) : 250=95.20%, 500=4.80% 00:31:27.546 cpu : usr=4.60%, sys=7.10%, ctx=4881, majf=0, minf=1 00:31:27.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 issued rwts: total=2319,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.546 job1: (groupid=0, jobs=1): err= 0: pid=1656171: Mon Dec 9 15:24:29 2024 00:31:27.546 read: IOPS=2358, BW=9435KiB/s (9661kB/s)(9444KiB/1001msec) 00:31:27.546 slat (nsec): min=6629, max=29293, avg=7389.26, stdev=714.11 00:31:27.546 clat (usec): min=187, max=482, avg=219.53, stdev=21.10 00:31:27.546 lat (usec): min=194, max=490, avg=226.92, stdev=21.14 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:31:27.546 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 217], 00:31:27.546 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 265], 00:31:27.546 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 424], 99.95th=[ 429], 00:31:27.546 | 99.99th=[ 482] 00:31:27.546 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:27.546 slat (nsec): min=9318, max=36930, avg=10569.59, stdev=994.10 00:31:27.546 clat (usec): min=131, max=328, avg=166.06, stdev=20.51 00:31:27.546 lat (usec): min=141, max=365, avg=176.63, stdev=20.58 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:31:27.546 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:31:27.546 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 196], 95.00th=[ 202], 00:31:27.546 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 251], 99.95th=[ 289], 00:31:27.546 | 99.99th=[ 330] 00:31:27.546 bw ( KiB/s): min=11120, max=11120, per=36.07%, avg=11120.00, stdev= 0.00, samples=1 00:31:27.546 iops : min= 2780, max= 2780, avg=2780.00, stdev= 0.00, samples=1 00:31:27.546 lat (usec) : 250=96.16%, 500=3.84% 00:31:27.546 cpu : usr=4.00%, sys=3.20%, ctx=4921, majf=0, minf=1 00:31:27.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 issued rwts: total=2361,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.546 job2: (groupid=0, jobs=1): err= 0: pid=1656190: Mon Dec 9 15:24:29 2024 00:31:27.546 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:27.546 slat (nsec): min=7488, max=37320, avg=8597.14, stdev=1303.49 00:31:27.546 clat (usec): min=217, max=490, avg=252.33, stdev=19.60 00:31:27.546 lat (usec): min=226, max=499, avg=260.92, stdev=19.60 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:31:27.546 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:31:27.546 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:31:27.546 | 99.00th=[ 289], 
99.50th=[ 347], 99.90th=[ 474], 99.95th=[ 482], 00:31:27.546 | 99.99th=[ 490] 00:31:27.546 write: IOPS=2358, BW=9435KiB/s (9661kB/s)(9444KiB/1001msec); 0 zone resets 00:31:27.546 slat (nsec): min=10951, max=45022, avg=12239.34, stdev=1804.05 00:31:27.546 clat (usec): min=149, max=320, avg=177.80, stdev=12.49 00:31:27.546 lat (usec): min=160, max=365, avg=190.04, stdev=12.90 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:31:27.546 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:31:27.546 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 198], 00:31:27.546 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 310], 99.95th=[ 322], 00:31:27.546 | 99.99th=[ 322] 00:31:27.546 bw ( KiB/s): min= 9072, max= 9072, per=29.42%, avg=9072.00, stdev= 0.00, samples=1 00:31:27.546 iops : min= 2268, max= 2268, avg=2268.00, stdev= 0.00, samples=1 00:31:27.546 lat (usec) : 250=76.59%, 500=23.41% 00:31:27.546 cpu : usr=3.50%, sys=7.50%, ctx=4412, majf=0, minf=1 00:31:27.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 issued rwts: total=2048,2361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.546 job3: (groupid=0, jobs=1): err= 0: pid=1656198: Mon Dec 9 15:24:29 2024 00:31:27.546 read: IOPS=23, BW=92.6KiB/s (94.8kB/s)(96.0KiB/1037msec) 00:31:27.546 slat (nsec): min=9048, max=19361, avg=10634.37, stdev=2259.42 00:31:27.546 clat (usec): min=427, max=41105, avg=39285.03, stdev=8277.07 00:31:27.546 lat (usec): min=437, max=41125, avg=39295.67, stdev=8277.11 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 429], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:27.546 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:27.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:27.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:27.546 | 99.99th=[41157] 00:31:27.546 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:31:27.546 slat (nsec): min=9369, max=63370, avg=11335.19, stdev=2617.85 00:31:27.546 clat (usec): min=143, max=396, avg=168.75, stdev=15.01 00:31:27.546 lat (usec): min=153, max=459, avg=180.08, stdev=16.76 00:31:27.546 clat percentiles (usec): 00:31:27.546 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:31:27.546 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:31:27.546 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 188], 00:31:27.546 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 396], 99.95th=[ 396], 00:31:27.546 | 99.99th=[ 396] 00:31:27.546 bw ( KiB/s): min= 4096, max= 4096, per=13.29%, avg=4096.00, stdev= 0.00, samples=1 00:31:27.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:27.546 lat (usec) : 250=95.15%, 500=0.56% 00:31:27.546 lat (msec) : 50=4.29% 00:31:27.546 cpu : usr=0.48%, sys=0.29%, ctx=537, majf=0, minf=2 00:31:27.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.546 issued rwts: total=24,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:27.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:27.546 00:31:27.546 Run status group 0 (all jobs): 00:31:27.546 READ: bw=25.4MiB/s (26.7MB/s), 92.6KiB/s-9435KiB/s (94.8kB/s-9661kB/s), io=26.4MiB (27.7MB), run=1001-1037msec 00:31:27.546 WRITE: bw=30.1MiB/s (31.6MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=31.2MiB (32.7MB), run=1001-1037msec 00:31:27.546 00:31:27.546 Disk stats (read/write): 00:31:27.546 nvme0n1: ios=2037/2048, merge=0/0, ticks=651/330, in_queue=981, util=89.28% 00:31:27.546 nvme0n2: ios=2031/2048, merge=0/0, ticks=781/333, in_queue=1114, util=89.18% 00:31:27.546 nvme0n3: ios=1670/2048, merge=0/0, ticks=1344/339, in_queue=1683, util=96.20% 00:31:27.546 nvme0n4: ios=19/512, merge=0/0, ticks=738/86, in_queue=824, util=89.53% 00:31:27.546 15:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:27.546 [global] 00:31:27.546 thread=1 00:31:27.546 invalidate=1 00:31:27.546 rw=randwrite 00:31:27.546 time_based=1 00:31:27.546 runtime=1 00:31:27.546 ioengine=libaio 00:31:27.546 direct=1 00:31:27.546 bs=4096 00:31:27.546 iodepth=1 00:31:27.546 norandommap=0 00:31:27.546 numjobs=1 00:31:27.546 00:31:27.546 verify_dump=1 00:31:27.546 verify_backlog=512 00:31:27.546 verify_state_save=0 00:31:27.546 do_verify=1 00:31:27.546 verify=crc32c-intel 00:31:27.546 [job0] 00:31:27.546 filename=/dev/nvme0n1 00:31:27.546 [job1] 00:31:27.546 filename=/dev/nvme0n2 00:31:27.546 [job2] 00:31:27.546 filename=/dev/nvme0n3 00:31:27.546 [job3] 00:31:27.546 filename=/dev/nvme0n4 00:31:27.546 Could not set queue depth (nvme0n1) 00:31:27.546 Could not set queue depth (nvme0n2) 00:31:27.546 Could not set queue depth (nvme0n3) 00:31:27.546 Could not set queue depth (nvme0n4) 00:31:27.804 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:27.804 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:27.804 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:27.804 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:27.804 fio-3.35 00:31:27.804 Starting 4 threads 00:31:29.174 00:31:29.174 job0: (groupid=0, jobs=1): err= 0: pid=1656596: Mon Dec 9 15:24:30 2024 00:31:29.174 read: IOPS=155, BW=623KiB/s (638kB/s)(624KiB/1001msec) 00:31:29.174 slat (nsec): min=7941, max=24368, avg=10013.62, stdev=3341.82 00:31:29.174 clat (usec): min=200, max=41970, avg=5727.69, stdev=13980.08 00:31:29.174 lat (usec): min=208, max=41982, avg=5737.71, stdev=13982.33 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:31:29.174 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 245], 00:31:29.174 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[41157], 95.00th=[41157], 00:31:29.174 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:29.174 | 99.99th=[42206] 00:31:29.174 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:29.174 slat (nsec): min=10063, max=47802, avg=12201.35, stdev=2616.61 00:31:29.174 clat (usec): min=137, max=272, avg=188.93, stdev=17.08 00:31:29.174 lat (usec): min=149, max=320, avg=201.13, stdev=17.31 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 
1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:31:29.174 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:31:29.174 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 217], 00:31:29.174 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 273], 99.95th=[ 273], 00:31:29.174 | 99.99th=[ 273] 00:31:29.174 bw ( KiB/s): min= 4096, max= 4096, per=18.45%, avg=4096.00, stdev= 0.00, samples=1 00:31:29.174 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:29.174 lat (usec) : 250=94.31%, 500=2.54% 00:31:29.174 lat (msec) : 50=3.14% 00:31:29.174 cpu : usr=0.50%, sys=0.60%, ctx=669, majf=0, minf=1 00:31:29.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=156,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.174 job1: (groupid=0, jobs=1): err= 0: pid=1656597: Mon Dec 9 15:24:30 2024 00:31:29.174 read: IOPS=2297, BW=9191KiB/s (9411kB/s)(9200KiB/1001msec) 00:31:29.174 slat (nsec): min=6801, max=49561, avg=8438.66, stdev=2056.12 00:31:29.174 clat (usec): min=173, max=3006, avg=226.81, stdev=66.59 00:31:29.174 lat (usec): min=181, max=3056, avg=235.25, stdev=67.46 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 204], 00:31:29.174 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 237], 00:31:29.174 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 258], 00:31:29.174 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 478], 99.95th=[ 1074], 00:31:29.174 | 99.99th=[ 2999] 00:31:29.174 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:29.174 slat (nsec): min=9423, max=46167, avg=12037.68, stdev=2711.98 00:31:29.174 clat (usec): min=119, max=2642, avg=161.66, stdev=61.86 00:31:29.174 lat (usec): min=132, max=2678, avg=173.70, stdev=62.49 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 137], 00:31:29.174 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:31:29.174 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 237], 95.00th=[ 241], 00:31:29.174 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 396], 99.95th=[ 1090], 00:31:29.174 | 99.99th=[ 2638] 00:31:29.174 bw ( KiB/s): min=10048, max=10048, per=45.27%, avg=10048.00, stdev= 0.00, samples=1 00:31:29.174 iops : min= 2512, max= 2512, avg=2512.00, stdev= 0.00, samples=1 00:31:29.174 lat (usec) : 250=94.32%, 500=5.60% 00:31:29.174 lat (msec) : 2=0.04%, 4=0.04% 00:31:29.174 cpu : usr=2.50%, sys=5.80%, ctx=4861, majf=0, minf=1 00:31:29.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.174 job2: (groupid=0, jobs=1): err= 0: pid=1656598: Mon Dec 9 15:24:30 2024 00:31:29.174 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:31:29.174 slat (nsec): min=10011, max=24605, avg=21266.41, stdev=3725.71 00:31:29.174 clat (usec): min=40873, max=41114, avg=40965.01, stdev=66.84 00:31:29.174 lat 
(usec): min=40896, max=41128, avg=40986.28, stdev=64.89 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:29.174 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:29.174 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:29.174 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:29.174 | 99.99th=[41157] 00:31:29.174 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:31:29.174 slat (nsec): min=9866, max=46840, avg=11170.37, stdev=2290.47 00:31:29.174 clat (usec): min=154, max=325, avg=190.39, stdev=17.87 00:31:29.174 lat (usec): min=166, max=337, avg=201.56, stdev=18.28 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:31:29.174 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:31:29.174 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 221], 00:31:29.174 | 99.00th=[ 247], 99.50th=[ 273], 99.90th=[ 326], 99.95th=[ 326], 00:31:29.174 | 99.99th=[ 326] 00:31:29.174 bw ( KiB/s): min= 4096, max= 4096, per=18.45%, avg=4096.00, stdev= 0.00, samples=1 00:31:29.174 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:29.174 lat (usec) : 250=95.13%, 500=0.75% 00:31:29.174 lat (msec) : 50=4.12% 00:31:29.174 cpu : usr=0.40%, sys=0.89%, ctx=534, majf=0, minf=2 00:31:29.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.175 job3: (groupid=0, jobs=1): err= 0: pid=1656599: Mon Dec 9 15:24:30 2024 00:31:29.175 read: IOPS=1890, BW=7563KiB/s (7744kB/s)(7676KiB/1015msec) 00:31:29.175 slat (nsec): min=6857, max=31225, avg=8390.12, stdev=1648.35 00:31:29.175 clat (usec): min=190, max=41088, avg=318.56, stdev=1852.64 00:31:29.175 lat (usec): min=201, max=41099, avg=326.95, stdev=1852.72 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:31:29.175 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:31:29.175 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 285], 00:31:29.175 | 99.00th=[ 318], 99.50th=[ 465], 99.90th=[41157], 99.95th=[41157], 00:31:29.175 | 99.99th=[41157] 00:31:29.175 write: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec); 0 zone resets 00:31:29.175 slat (nsec): min=9237, max=40594, avg=11547.97, stdev=2199.85 00:31:29.175 clat (usec): min=139, max=3633, avg=171.43, stdev=78.11 00:31:29.175 lat (usec): min=150, max=3644, avg=182.98, stdev=78.21 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:31:29.175 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:31:29.175 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 192], 00:31:29.175 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 269], 99.95th=[ 297], 00:31:29.175 | 99.99th=[ 3621] 00:31:29.175 bw ( KiB/s): min= 5984, max=10400, per=36.91%, avg=8192.00, stdev=3122.58, samples=2 00:31:29.175 iops : min= 1496, max= 2600, avg=2048.00, stdev=780.65, samples=2 00:31:29.175 lat (usec) : 250=92.29%, 500=7.56%, 750=0.03% 00:31:29.175 lat 
(msec) : 4=0.03%, 50=0.10% 00:31:29.175 cpu : usr=3.16%, sys=5.33%, ctx=3967, majf=0, minf=2 00:31:29.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=1919,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:29.175 00:31:29.175 Run status group 0 (all jobs): 00:31:29.175 READ: bw=16.9MiB/s (17.7MB/s), 87.4KiB/s-9191KiB/s (89.5kB/s-9411kB/s), io=17.2MiB (18.0MB), run=1001-1015msec 00:31:29.175 WRITE: bw=21.7MiB/s (22.7MB/s), 2034KiB/s-9.99MiB/s (2083kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1015msec 00:31:29.175 00:31:29.175 Disk stats (read/write): 00:31:29.175 nvme0n1: ios=46/512, merge=0/0, ticks=1726/92, in_queue=1818, util=97.39% 00:31:29.175 nvme0n2: ios=1957/2048, merge=0/0, ticks=1430/340, in_queue=1770, util=97.35% 00:31:29.175 nvme0n3: ios=18/512, merge=0/0, ticks=738/91, in_queue=829, util=88.78% 00:31:29.175 nvme0n4: ios=1899/2048, merge=0/0, ticks=418/323, in_queue=741, util=89.63% 00:31:29.175 15:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:29.175 [global] 00:31:29.175 thread=1 00:31:29.175 invalidate=1 00:31:29.175 rw=write 00:31:29.175 time_based=1 00:31:29.175 runtime=1 00:31:29.175 ioengine=libaio 00:31:29.175 direct=1 00:31:29.175 bs=4096 00:31:29.175 iodepth=128 00:31:29.175 norandommap=0 00:31:29.175 numjobs=1 00:31:29.175 00:31:29.175 verify_dump=1 00:31:29.175 verify_backlog=512 00:31:29.175 verify_state_save=0 00:31:29.175 do_verify=1 00:31:29.175 verify=crc32c-intel 00:31:29.175 [job0] 00:31:29.175 filename=/dev/nvme0n1 00:31:29.175 [job1] 00:31:29.175 filename=/dev/nvme0n2 00:31:29.175 [job2] 00:31:29.175 filename=/dev/nvme0n3 00:31:29.175 [job3] 00:31:29.175 filename=/dev/nvme0n4 00:31:29.175 Could not set queue depth (nvme0n1) 00:31:29.175 Could not set queue depth (nvme0n2) 00:31:29.175 Could not set queue depth (nvme0n3) 00:31:29.175 Could not set queue depth (nvme0n4) 00:31:29.432 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.432 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.432 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.432 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:29.432 fio-3.35 00:31:29.432 Starting 4 threads 00:31:30.805 00:31:30.805 job0: (groupid=0, jobs=1): err= 0: pid=1656962: Mon Dec 9 15:24:32 2024 00:31:30.805 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:31:30.805 slat (nsec): min=1095, max=19798k, avg=86828.06, stdev=743969.08 00:31:30.805 clat (usec): min=3481, max=49945, avg=12197.10, stdev=5855.11 00:31:30.805 lat (usec): min=3488, max=49950, avg=12283.93, stdev=5911.27 00:31:30.805 clat percentiles (usec): 00:31:30.805 | 1.00th=[ 4293], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9372], 00:31:30.805 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[10683], 00:31:30.805 | 70.00th=[11076], 80.00th=[11994], 90.00th=[19006], 95.00th=[25297], 00:31:30.805 | 99.00th=[36963], 99.50th=[42730], 
99.90th=[46924], 99.95th=[46924], 00:31:30.805 | 99.99th=[50070] 00:31:30.805 write: IOPS=5037, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1008msec); 0 zone resets 00:31:30.805 slat (nsec): min=1798, max=13728k, avg=103755.44, stdev=744161.74 00:31:30.805 clat (usec): min=1064, max=120052, avg=14134.41, stdev=15484.35 00:31:30.805 lat (usec): min=1074, max=121449, avg=14238.16, stdev=15574.11 00:31:30.805 clat percentiles (msec): 00:31:30.805 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:31:30.805 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:31:30.805 | 70.00th=[ 11], 80.00th=[ 14], 90.00th=[ 21], 95.00th=[ 40], 00:31:30.805 | 99.00th=[ 104], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 121], 00:31:30.805 | 99.99th=[ 121] 00:31:30.805 bw ( KiB/s): min=16384, max=23224, per=26.35%, avg=19804.00, stdev=4836.61, samples=2 00:31:30.805 iops : min= 4096, max= 5806, avg=4951.00, stdev=1209.15, samples=2 00:31:30.805 lat (msec) : 2=0.36%, 4=0.78%, 10=41.49%, 20=47.99%, 50=7.20% 00:31:30.805 lat (msec) : 100=1.61%, 250=0.57% 00:31:30.805 cpu : usr=3.18%, sys=4.77%, ctx=371, majf=0, minf=1 00:31:30.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:30.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.805 issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.805 job1: (groupid=0, jobs=1): err= 0: pid=1656963: Mon Dec 9 15:24:32 2024 00:31:30.805 read: IOPS=5569, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1009msec) 00:31:30.805 slat (nsec): min=1673, max=22481k, avg=87782.65, stdev=806071.44 00:31:30.805 clat (usec): min=3851, max=42528, avg=12489.95, stdev=4852.31 00:31:30.805 lat (usec): min=4659, max=42535, avg=12577.73, stdev=4904.60 00:31:30.805 clat percentiles (usec): 00:31:30.805 | 1.00th=[ 7177], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8848], 00:31:30.805 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[11863], 60.00th=[13566], 00:31:30.805 | 70.00th=[14353], 80.00th=[15008], 90.00th=[16450], 95.00th=[17433], 00:31:30.805 | 99.00th=[35390], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:31:30.805 | 99.99th=[42730] 00:31:30.805 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:31:30.805 slat (usec): min=2, max=10970, avg=77.41, stdev=639.16 00:31:30.805 clat (usec): min=1058, max=22675, avg=10231.61, stdev=2937.30 00:31:30.805 lat (usec): min=1086, max=22698, avg=10309.02, stdev=2978.87 00:31:30.805 clat percentiles (usec): 00:31:30.806 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 8225], 00:31:30.806 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:31:30.806 | 70.00th=[11338], 80.00th=[12649], 90.00th=[13566], 95.00th=[15533], 00:31:30.806 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20841], 00:31:30.806 | 99.99th=[22676] 00:31:30.806 bw ( KiB/s): min=20480, max=24576, per=29.97%, avg=22528.00, stdev=2896.31, samples=2 00:31:30.806 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:31:30.806 lat (msec) : 2=0.07%, 4=0.07%, 10=49.41%, 20=48.06%, 50=2.38% 00:31:30.806 cpu : usr=4.37%, sys=8.83%, ctx=254, majf=0, minf=1 00:31:30.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:30.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.806 issued rwts: total=5620,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.806 job2: (groupid=0, jobs=1): err= 0: pid=1656964: Mon Dec 9 15:24:32 2024 00:31:30.806 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:31:30.806 slat (nsec): min=1482, max=15236k, avg=101810.93, stdev=741372.88 00:31:30.806 clat (usec): min=5788, max=37367, avg=13453.12, stdev=4693.65 00:31:30.806 lat (usec): min=5800, max=37374, avg=13554.93, stdev=4736.15 00:31:30.806 clat percentiles (usec): 00:31:30.806 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:31:30.806 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649], 00:31:30.806 | 70.00th=[13566], 80.00th=[15664], 90.00th=[20579], 95.00th=[22152], 00:31:30.806 | 99.00th=[31065], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:31:30.806 | 99.99th=[37487] 00:31:30.806 write: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1002msec); 0 zone resets 00:31:30.806 slat (usec): min=2, max=15779, avg=126.60, stdev=788.61 00:31:30.806 clat (usec): min=266, max=114226, avg=17283.68, stdev=18375.18 00:31:30.806 lat (usec): min=823, max=114241, avg=17410.28, stdev=18499.76 00:31:30.806 clat percentiles (msec): 00:31:30.806 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 11], 00:31:30.806 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:31:30.806 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 33], 95.00th=[ 53], 00:31:30.806 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 114], 00:31:30.806 | 99.99th=[ 114] 00:31:30.806 bw ( KiB/s): min=13680, max=19088, per=21.80%, avg=16384.00, stdev=3824.03, samples=2 00:31:30.806 iops : min= 3420, max= 4772, avg=4096.00, stdev=956.01, samples=2 00:31:30.806 lat (usec) : 500=0.01%, 1000=0.08% 00:31:30.806 lat (msec) : 2=0.21%, 4=1.36%, 10=13.91%, 20=71.19%, 50=10.56% 00:31:30.806 lat (msec) : 100=1.71%, 250=0.97% 00:31:30.806 cpu : usr=3.60%, sys=5.79%, ctx=389, majf=0, minf=1 00:31:30.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:30.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.806 issued rwts: total=4096,4156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.806 job3: (groupid=0, jobs=1): err= 0: pid=1656966: Mon Dec 9 15:24:32 2024 00:31:30.806 read: IOPS=3947, BW=15.4MiB/s (16.2MB/s)(15.6MiB/1009msec) 00:31:30.806 slat (nsec): min=1560, max=25737k, avg=134744.41, stdev=1124971.91 00:31:30.806 clat (usec): min=728, max=66543, avg=17341.26, stdev=11993.15 00:31:30.806 lat (usec): min=6912, max=66569, avg=17476.01, stdev=12092.77 00:31:30.806 clat percentiles (usec): 00:31:30.806 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:31:30.806 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12125], 60.00th=[13042], 00:31:30.806 | 70.00th=[13829], 80.00th=[26608], 90.00th=[38011], 95.00th=[45351], 00:31:30.806 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[61604], 00:31:30.806 | 99.99th=[66323] 00:31:30.806 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:31:30.806 slat (usec): min=2, max=22837, avg=107.32, stdev=834.86 00:31:30.806 clat (usec): min=5598, max=57723, avg=14243.47, stdev=7298.26 00:31:30.806 lat (usec): min=5616, max=57756, avg=14350.79, 
stdev=7387.36 00:31:30.806 clat percentiles (usec): 00:31:30.806 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[10814], 20.00th=[11076], 00:31:30.806 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:31:30.806 | 70.00th=[12256], 80.00th=[12780], 90.00th=[23200], 95.00th=[34866], 00:31:30.806 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[53740], 00:31:30.806 | 99.99th=[57934] 00:31:30.806 bw ( KiB/s): min=10392, max=22376, per=21.80%, avg=16384.00, stdev=8473.97, samples=2 00:31:30.806 iops : min= 2598, max= 5594, avg=4096.00, stdev=2118.49, samples=2 00:31:30.806 lat (usec) : 750=0.01% 00:31:30.806 lat (msec) : 10=9.72%, 20=72.88%, 50=15.93%, 100=1.46% 00:31:30.806 cpu : usr=4.27%, sys=5.95%, ctx=249, majf=0, minf=1 00:31:30.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:30.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.806 issued rwts: total=3983,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:30.806 00:31:30.806 Run status group 0 (all jobs): 00:31:30.806 READ: bw=70.9MiB/s (74.3MB/s), 15.4MiB/s-21.8MiB/s (16.2MB/s-22.8MB/s), io=71.5MiB (75.0MB), run=1002-1009msec 00:31:30.806 WRITE: bw=73.4MiB/s (77.0MB/s), 15.9MiB/s-21.8MiB/s (16.6MB/s-22.9MB/s), io=74.1MiB (77.7MB), run=1002-1009msec 00:31:30.806 00:31:30.806 Disk stats (read/write): 00:31:30.806 nvme0n1: ios=3634/3958, merge=0/0, ticks=32588/37433, in_queue=70021, util=86.17% 00:31:30.806 nvme0n2: ios=4640/5059, merge=0/0, ticks=53414/46093, in_queue=99507, util=100.00% 00:31:30.806 nvme0n3: ios=3115/3370, merge=0/0, ticks=35835/54524, in_queue=90359, util=96.86% 00:31:30.806 nvme0n4: ios=3602/3884, merge=0/0, ticks=27927/23131, in_queue=51058, util=96.51% 00:31:30.806 15:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:30.806 [global] 00:31:30.806 thread=1 00:31:30.806 invalidate=1 00:31:30.806 rw=randwrite 00:31:30.806 time_based=1 00:31:30.806 runtime=1 00:31:30.806 ioengine=libaio 00:31:30.806 direct=1 00:31:30.806 bs=4096 00:31:30.806 iodepth=128 00:31:30.806 norandommap=0 00:31:30.806 numjobs=1 00:31:30.806 00:31:30.806 verify_dump=1 00:31:30.806 verify_backlog=512 00:31:30.806 verify_state_save=0 00:31:30.806 do_verify=1 00:31:30.806 verify=crc32c-intel 00:31:30.806 [job0] 00:31:30.806 filename=/dev/nvme0n1 00:31:30.806 [job1] 00:31:30.806 filename=/dev/nvme0n2 00:31:30.806 [job2] 00:31:30.806 filename=/dev/nvme0n3 00:31:30.806 [job3] 00:31:30.806 filename=/dev/nvme0n4 00:31:30.806 Could not set queue depth (nvme0n1) 00:31:30.806 Could not set queue depth (nvme0n2) 00:31:30.806 Could not set queue depth (nvme0n3) 00:31:30.806 Could not set queue depth (nvme0n4) 00:31:31.063 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.063 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.063 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.063 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:31.063 fio-3.35 00:31:31.063 Starting 4 threads 00:31:32.451 
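[Editor's note] The [global]/[job0..3] sections echoed above are ordinary fio job-file syntax, so the randwrite pass started here can be reproduced by hand outside the CI wrapper with a sketch like the one below. This is illustrative only: the job-file path /tmp/nvmf-randwrite.fio is an arbitrary name chosen for the example, and the real scripts/fio-wrapper may assemble and invoke its job file differently; only the parameter values are copied from the log above.
#!/usr/bin/env bash
# Sketch: hand-built fio job file mirroring the parameters logged above.
# Assumes the four target namespaces are already connected as /dev/nvme0n1..n4.
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
# Run the four jobs; fio prints per-job latency/bandwidth summaries like those below.
fio /tmp/nvmf-randwrite.fio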
00:31:32.451 job0: (groupid=0, jobs=1): err= 0: pid=1657334: Mon Dec 9 15:24:33 2024 00:31:32.451 read: IOPS=4873, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1004msec) 00:31:32.451 slat (nsec): min=1165, max=9008.6k, avg=101121.08, stdev=618009.58 00:31:32.451 clat (usec): min=702, max=32169, avg=12722.26, stdev=4422.62 00:31:32.451 lat (usec): min=4666, max=32195, avg=12823.38, stdev=4461.75 00:31:32.451 clat percentiles (usec): 00:31:32.451 | 1.00th=[ 5080], 5.00th=[ 7767], 10.00th=[ 8586], 20.00th=[ 9241], 00:31:32.451 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11469], 60.00th=[12780], 00:31:32.451 | 70.00th=[13698], 80.00th=[15664], 90.00th=[18482], 95.00th=[21627], 00:31:32.451 | 99.00th=[27132], 99.50th=[27657], 99.90th=[28967], 99.95th=[29230], 00:31:32.451 | 99.99th=[32113] 00:31:32.451 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:31:32.451 slat (nsec): min=1863, max=13249k, avg=94126.05, stdev=543234.05 00:31:32.451 clat (usec): min=3746, max=55965, avg=12398.34, stdev=6865.19 00:31:32.451 lat (usec): min=3753, max=55973, avg=12492.47, stdev=6913.30 00:31:32.451 clat percentiles (usec): 00:31:32.451 | 1.00th=[ 4883], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 8979], 00:31:32.451 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:31:32.451 | 70.00th=[10814], 80.00th=[15270], 90.00th=[20317], 95.00th=[23462], 00:31:32.451 | 99.00th=[43779], 99.50th=[47973], 99.90th=[54264], 99.95th=[55837], 00:31:32.451 | 99.99th=[55837] 00:31:32.451 bw ( KiB/s): min=19144, max=21816, per=29.62%, avg=20480.00, stdev=1889.39, samples=2 00:31:32.451 iops : min= 4786, max= 5454, avg=5120.00, stdev=472.35, samples=2 00:31:32.451 lat (usec) : 750=0.01% 00:31:32.451 lat (msec) : 4=0.19%, 10=39.22%, 20=50.88%, 50=9.58%, 100=0.12% 00:31:32.451 cpu : usr=2.89%, sys=5.18%, ctx=441, majf=0, minf=1 00:31:32.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:32.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.451 issued rwts: total=4893,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.451 job1: (groupid=0, jobs=1): err= 0: pid=1657335: Mon Dec 9 15:24:33 2024 00:31:32.451 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:31:32.451 slat (nsec): min=1638, max=10002k, avg=105717.22, stdev=698979.48 00:31:32.451 clat (usec): min=6545, max=32246, avg=13943.28, stdev=4316.08 00:31:32.451 lat (usec): min=6552, max=36587, avg=14049.00, stdev=4366.95 00:31:32.451 clat percentiles (usec): 00:31:32.451 | 1.00th=[ 7635], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10421], 00:31:32.451 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[13566], 00:31:32.451 | 70.00th=[15401], 80.00th=[18220], 90.00th=[20841], 95.00th=[22938], 00:31:32.451 | 99.00th=[24511], 99.50th=[25822], 99.90th=[31065], 99.95th=[32113], 00:31:32.451 | 99.99th=[32375] 00:31:32.451 write: IOPS=4393, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1005msec); 0 zone resets 00:31:32.451 slat (nsec): min=1959, max=7608.3k, avg=122326.18, stdev=685014.78 00:31:32.451 clat (usec): min=3134, max=50995, avg=15932.86, stdev=9050.39 00:31:32.451 lat (usec): min=5062, max=51007, avg=16055.18, stdev=9119.93 00:31:32.451 clat percentiles (usec): 00:31:32.451 | 1.00th=[ 6521], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10028], 00:31:32.451 | 30.00th=[10552], 40.00th=[11863], 50.00th=[12256], 
60.00th=[13435], 00:31:32.451 | 70.00th=[14746], 80.00th=[17695], 90.00th=[30540], 95.00th=[36963], 00:31:32.451 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:31:32.451 | 99.99th=[51119] 00:31:32.451 bw ( KiB/s): min=16072, max=18232, per=24.80%, avg=17152.00, stdev=1527.35, samples=2 00:31:32.452 iops : min= 4018, max= 4558, avg=4288.00, stdev=381.84, samples=2 00:31:32.452 lat (msec) : 4=0.02%, 10=15.40%, 20=69.72%, 50=14.71%, 100=0.14% 00:31:32.452 cpu : usr=3.39%, sys=5.98%, ctx=297, majf=0, minf=1 00:31:32.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:32.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.452 issued rwts: total=4096,4415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.452 job2: (groupid=0, jobs=1): err= 0: pid=1657336: Mon Dec 9 15:24:33 2024 00:31:32.452 read: IOPS=3553, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1004msec) 00:31:32.452 slat (nsec): min=1288, max=11557k, avg=136785.37, stdev=772047.27 00:31:32.452 clat (usec): min=1213, max=41521, avg=17185.90, stdev=6673.82 00:31:32.452 lat (usec): min=5982, max=43019, avg=17322.68, stdev=6742.01 00:31:32.452 clat percentiles (usec): 00:31:32.452 | 1.00th=[ 6980], 5.00th=[10290], 10.00th=[10814], 20.00th=[11863], 00:31:32.452 | 30.00th=[12911], 40.00th=[14091], 50.00th=[14746], 60.00th=[16450], 00:31:32.452 | 70.00th=[18744], 80.00th=[22152], 90.00th=[28181], 95.00th=[31065], 00:31:32.452 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38011], 99.95th=[41157], 00:31:32.452 | 99.99th=[41681] 00:31:32.452 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:31:32.452 slat (nsec): min=1921, max=7850.7k, avg=138000.50, stdev=688026.30 00:31:32.452 clat (usec): min=1443, max=48198, avg=18166.99, stdev=10117.89 00:31:32.452 lat (usec): min=1452, max=48215, avg=18304.99, stdev=10184.96 00:31:32.452 clat percentiles (usec): 00:31:32.452 | 1.00th=[ 6325], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11207], 00:31:32.452 | 30.00th=[11731], 40.00th=[12780], 50.00th=[14091], 60.00th=[16188], 00:31:32.452 | 70.00th=[17957], 80.00th=[21627], 90.00th=[37487], 95.00th=[41157], 00:31:32.452 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:31:32.452 | 99.99th=[47973] 00:31:32.452 bw ( KiB/s): min=12240, max=16432, per=20.73%, avg=14336.00, stdev=2964.19, samples=2 00:31:32.452 iops : min= 3060, max= 4108, avg=3584.00, stdev=741.05, samples=2 00:31:32.452 lat (msec) : 2=0.06%, 4=0.35%, 10=4.92%, 20=71.25%, 50=23.42% 00:31:32.452 cpu : usr=1.79%, sys=4.49%, ctx=330, majf=0, minf=1 00:31:32.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:32.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.452 issued rwts: total=3568,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.452 job3: (groupid=0, jobs=1): err= 0: pid=1657337: Mon Dec 9 15:24:33 2024 00:31:32.452 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:31:32.452 slat (nsec): min=1123, max=8809.9k, avg=110751.42, stdev=693634.02 00:31:32.452 clat (usec): min=4169, max=33761, avg=13874.86, stdev=4082.42 00:31:32.452 lat (usec): min=4176, max=33769, avg=13985.61, stdev=4124.84 
00:31:32.452 clat percentiles (usec): 00:31:32.452 | 1.00th=[ 5211], 5.00th=[ 8094], 10.00th=[ 9503], 20.00th=[10814], 00:31:32.452 | 30.00th=[11207], 40.00th=[12518], 50.00th=[13173], 60.00th=[14353], 00:31:32.452 | 70.00th=[15795], 80.00th=[16909], 90.00th=[19268], 95.00th=[21103], 00:31:32.452 | 99.00th=[25297], 99.50th=[28705], 99.90th=[33817], 99.95th=[33817], 00:31:32.452 | 99.99th=[33817] 00:31:32.452 write: IOPS=4242, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1003msec); 0 zone resets 00:31:32.452 slat (nsec): min=1986, max=8289.6k, avg=122604.04, stdev=610511.45 00:31:32.452 clat (usec): min=543, max=49691, avg=16540.95, stdev=9962.66 00:31:32.452 lat (usec): min=1222, max=49701, avg=16663.55, stdev=10032.44 00:31:32.452 clat percentiles (usec): 00:31:32.452 | 1.00th=[ 4178], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[11076], 00:31:32.452 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12518], 60.00th=[14091], 00:31:32.452 | 70.00th=[16057], 80.00th=[18220], 90.00th=[35914], 95.00th=[42730], 00:31:32.452 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:31:32.452 | 99.99th=[49546] 00:31:32.452 bw ( KiB/s): min=15632, max=17384, per=23.87%, avg=16508.00, stdev=1238.85, samples=2 00:31:32.452 iops : min= 3908, max= 4346, avg=4127.00, stdev=309.71, samples=2 00:31:32.452 lat (usec) : 750=0.01% 00:31:32.452 lat (msec) : 2=0.20%, 4=0.29%, 10=10.72%, 20=76.59%, 50=12.19% 00:31:32.452 cpu : usr=2.30%, sys=3.89%, ctx=516, majf=0, minf=2 00:31:32.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:32.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.452 issued rwts: total=4096,4255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.452 00:31:32.452 Run status group 0 (all jobs): 00:31:32.452 READ: bw=64.7MiB/s (67.9MB/s), 13.9MiB/s-19.0MiB/s (14.6MB/s-20.0MB/s), io=65.1MiB (68.2MB), run=1003-1005msec 00:31:32.452 WRITE: bw=67.5MiB/s (70.8MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=67.9MiB (71.2MB), run=1003-1005msec 00:31:32.452 00:31:32.452 Disk stats (read/write): 00:31:32.452 nvme0n1: ios=3915/4096, merge=0/0, ticks=21489/21490, in_queue=42979, util=84.97% 00:31:32.452 nvme0n2: ios=3688/4096, merge=0/0, ticks=24138/27635, in_queue=51773, util=89.23% 00:31:32.452 nvme0n3: ios=2862/3072, merge=0/0, ticks=14735/17809, in_queue=32544, util=93.52% 00:31:32.452 nvme0n4: ios=3095/3583, merge=0/0, ticks=23384/31330, in_queue=54714, util=94.20% 00:31:32.452 15:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:32.452 15:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1657562 00:31:32.452 15:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:32.452 15:24:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:32.452 [global] 00:31:32.452 thread=1 00:31:32.452 invalidate=1 00:31:32.452 rw=read 00:31:32.452 time_based=1 00:31:32.452 runtime=10 00:31:32.452 ioengine=libaio 00:31:32.452 direct=1 00:31:32.452 bs=4096 00:31:32.452 iodepth=1 00:31:32.452 norandommap=1 00:31:32.452 numjobs=1 00:31:32.452 00:31:32.452 [job0] 00:31:32.452 filename=/dev/nvme0n1 00:31:32.452 [job1] 00:31:32.452 
filename=/dev/nvme0n2 00:31:32.452 [job2] 00:31:32.452 filename=/dev/nvme0n3 00:31:32.452 [job3] 00:31:32.452 filename=/dev/nvme0n4 00:31:32.452 Could not set queue depth (nvme0n1) 00:31:32.452 Could not set queue depth (nvme0n2) 00:31:32.452 Could not set queue depth (nvme0n3) 00:31:32.452 Could not set queue depth (nvme0n4) 00:31:32.709 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.709 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.709 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.709 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:32.709 fio-3.35 00:31:32.709 Starting 4 threads 00:31:35.224 15:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:35.480 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:35.480 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26394624, buflen=4096 00:31:35.480 fio: pid=1657705, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:35.736 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:35.736 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:35.736 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6160384, buflen=4096 00:31:35.736 fio: pid=1657704, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:35.993 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45449216, buflen=4096 00:31:35.993 fio: pid=1657702, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:35.993 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:35.993 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:36.250 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52891648, buflen=4096 00:31:36.250 fio: pid=1657703, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:36.250 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.250 15:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:36.250 00:31:36.250 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1657702: Mon Dec 9 15:24:37 2024 00:31:36.250 read: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(43.3MiB/3150msec) 00:31:36.250 slat (usec): min=5, max=16583, avg=12.33, stdev=284.65 00:31:36.250 clat (usec): min=167, max=41299, 
avg=268.27, stdev=1092.30 00:31:36.250 lat (usec): min=185, max=41305, avg=280.60, stdev=1129.25 00:31:36.250 clat percentiles (usec): 00:31:36.250 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:31:36.250 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 235], 00:31:36.250 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 289], 95.00th=[ 318], 00:31:36.250 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 437], 99.95th=[40633], 00:31:36.250 | 99.99th=[41157] 00:31:36.250 bw ( KiB/s): min= 9824, max=17688, per=37.26%, avg=14099.33, stdev=2804.29, samples=6 00:31:36.250 iops : min= 2456, max= 4422, avg=3524.83, stdev=701.07, samples=6 00:31:36.250 lat (usec) : 250=70.60%, 500=29.30%, 750=0.02% 00:31:36.250 lat (msec) : 50=0.07% 00:31:36.250 cpu : usr=1.02%, sys=3.02%, ctx=11101, majf=0, minf=1 00:31:36.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 issued rwts: total=11097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.250 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1657703: Mon Dec 9 15:24:37 2024 00:31:36.250 read: IOPS=3822, BW=14.9MiB/s (15.7MB/s)(50.4MiB/3378msec) 00:31:36.250 slat (usec): min=6, max=33105, avg=14.31, stdev=364.04 00:31:36.250 clat (usec): min=164, max=9732, avg=243.73, stdev=87.11 00:31:36.250 lat (usec): min=182, max=33390, avg=258.04, stdev=375.13 00:31:36.250 clat percentiles (usec): 00:31:36.250 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 212], 20.00th=[ 239], 00:31:36.250 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:31:36.250 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 265], 00:31:36.250 | 99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 490], 99.95th=[ 506], 00:31:36.250 | 99.99th=[ 553] 00:31:36.250 bw ( KiB/s): min=14067, max=15496, per=40.15%, avg=15192.50, stdev=564.08, samples=6 00:31:36.250 iops : min= 3516, max= 3874, avg=3798.00, stdev=141.32, samples=6 00:31:36.250 lat (usec) : 250=67.43%, 500=32.50%, 750=0.05% 00:31:36.250 lat (msec) : 10=0.01% 00:31:36.250 cpu : usr=2.40%, sys=5.95%, ctx=12919, majf=0, minf=2 00:31:36.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 issued rwts: total=12914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.250 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1657704: Mon Dec 9 15:24:37 2024 00:31:36.250 read: IOPS=508, BW=2032KiB/s (2081kB/s)(6016KiB/2961msec) 00:31:36.250 slat (nsec): min=6871, max=33540, avg=9205.91, stdev=3386.30 00:31:36.250 clat (usec): min=211, max=55868, avg=1943.10, stdev=8089.97 00:31:36.250 lat (usec): min=219, max=55879, avg=1952.30, stdev=8090.76 00:31:36.250 clat percentiles (usec): 00:31:36.250 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:31:36.250 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:31:36.250 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 383], 95.00th=[ 502], 00:31:36.250 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[42206], 99.95th=[55837], 00:31:36.250 | 99.99th=[55837] 00:31:36.250 bw ( KiB/s): min= 96, max= 5640, per=6.31%, avg=2387.20, stdev=2437.20, samples=5 00:31:36.250 iops : min= 24, max= 1410, avg=596.80, stdev=609.30, samples=5 00:31:36.250 lat (usec) : 250=32.82%, 500=62.06%, 750=1.00% 00:31:36.250 lat (msec) : 50=3.99%, 100=0.07% 00:31:36.250 cpu : usr=0.14%, sys=0.57%, ctx=1506, majf=0, minf=2 00:31:36.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 issued rwts: total=1505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.250 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1657705: Mon Dec 9 15:24:37 2024 00:31:36.250 read: IOPS=2359, BW=9435KiB/s (9661kB/s)(25.2MiB/2732msec) 00:31:36.250 slat (nsec): min=6885, max=48739, avg=8308.77, stdev=2019.04 00:31:36.250 clat (usec): min=191, max=41960, avg=410.55, stdev=2586.36 00:31:36.250 lat (usec): min=199, max=41984, avg=418.85, stdev=2586.98 00:31:36.250 clat percentiles (usec): 00:31:36.250 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:31:36.250 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 247], 00:31:36.250 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 306], 00:31:36.250 | 99.00th=[ 461], 99.50th=[ 578], 99.90th=[41157], 99.95th=[41157], 00:31:36.250 | 99.99th=[42206] 00:31:36.250 bw ( KiB/s): min= 96, max=14552, per=27.22%, avg=10300.80, stdev=6004.69, samples=5 00:31:36.250 iops : min= 24, max= 3638, avg=2575.20, stdev=1501.17, samples=5 00:31:36.250 lat (usec) : 250=69.34%, 500=29.93%, 750=0.28% 00:31:36.250 lat (msec) : 2=0.03%, 50=0.40% 00:31:36.250 cpu : usr=1.06%, sys=3.81%, ctx=6445, majf=0, minf=2 00:31:36.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.250 issued rwts: total=6445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:36.250 00:31:36.250 Run status group 0 (all jobs): 00:31:36.250 READ: bw=37.0MiB/s (38.7MB/s), 2032KiB/s-14.9MiB/s (2081kB/s-15.7MB/s), io=125MiB (131MB), run=2732-3378msec 00:31:36.250 00:31:36.250 Disk stats (read/write): 00:31:36.250 nvme0n1: ios=10955/0, merge=0/0, ticks=2893/0, in_queue=2893, util=93.74% 00:31:36.250 nvme0n2: ios=12907/0, merge=0/0, ticks=3535/0, in_queue=3535, util=97.13% 00:31:36.250 nvme0n3: ios=1536/0, merge=0/0, ticks=3509/0, in_queue=3509, util=99.93% 00:31:36.250 nvme0n4: ios=6441/0, merge=0/0, ticks=2443/0, in_queue=2443, util=96.40% 00:31:36.250 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.250 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:36.507 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.507 15:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:36.762 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:36.762 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:37.018 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:37.018 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1657562 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:37.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:37.274 nvmf hotplug test: fio failed as expected 00:31:37.274 15:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.532 rmmod nvme_tcp 00:31:37.532 rmmod nvme_fabrics 00:31:37.532 rmmod nvme_keyring 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1654910 ']' 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1654910 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1654910 ']' 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1654910 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.532 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654910 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654910' 00:31:37.791 killing process with pid 1654910 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1654910 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1654910 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.791 15:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.326 00:31:40.326 real 0m25.842s 00:31:40.326 user 1m31.573s 00:31:40.326 sys 0m11.608s 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.326 ************************************ 00:31:40.326 END TEST nvmf_fio_target 00:31:40.326 ************************************ 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.326 ************************************ 00:31:40.326 START TEST nvmf_bdevio 00:31:40.326 ************************************ 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:40.326 * Looking for test storage... 
00:31:40.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.326 --rc genhtml_branch_coverage=1 00:31:40.326 --rc genhtml_function_coverage=1 00:31:40.326 --rc genhtml_legend=1 00:31:40.326 --rc geninfo_all_blocks=1 00:31:40.326 --rc geninfo_unexecuted_blocks=1 00:31:40.326 00:31:40.326 ' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.326 --rc genhtml_branch_coverage=1 00:31:40.326 --rc genhtml_function_coverage=1 00:31:40.326 --rc genhtml_legend=1 00:31:40.326 --rc geninfo_all_blocks=1 00:31:40.326 --rc geninfo_unexecuted_blocks=1 00:31:40.326 00:31:40.326 ' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.326 --rc genhtml_branch_coverage=1 00:31:40.326 --rc genhtml_function_coverage=1 00:31:40.326 --rc genhtml_legend=1 00:31:40.326 --rc geninfo_all_blocks=1 00:31:40.326 --rc geninfo_unexecuted_blocks=1 00:31:40.326 00:31:40.326 ' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.326 --rc genhtml_branch_coverage=1 00:31:40.326 --rc genhtml_function_coverage=1 00:31:40.326 --rc genhtml_legend=1 00:31:40.326 --rc geninfo_all_blocks=1 00:31:40.326 --rc geninfo_unexecuted_blocks=1 00:31:40.326 00:31:40.326 ' 00:31:40.326 15:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.326 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.327 15:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.327 15:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.892 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:46.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:46.893 15:24:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:46.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:46.893 Found net devices under 0000:af:00.0: cvl_0_0 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:46.893 Found net devices under 0000:af:00.1: cvl_0_1 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:31:46.893 00:31:46.893 --- 10.0.0.2 ping statistics --- 00:31:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.893 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:46.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:31:46.893 00:31:46.893 --- 10.0.0.1 ping statistics --- 00:31:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.893 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.893 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.894 15:24:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1661893 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1661893 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1661893 ']' 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.894 15:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 [2024-12-09 15:24:47.797580] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:46.894 [2024-12-09 15:24:47.798527] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:31:46.894 [2024-12-09 15:24:47.798565] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.894 [2024-12-09 15:24:47.877054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.894 [2024-12-09 15:24:47.918146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.894 [2024-12-09 15:24:47.918184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.894 [2024-12-09 15:24:47.918191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.894 [2024-12-09 15:24:47.918197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.894 [2024-12-09 15:24:47.918202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.894 [2024-12-09 15:24:47.919753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:46.894 [2024-12-09 15:24:47.919866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:46.894 [2024-12-09 15:24:47.919971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.894 [2024-12-09 15:24:47.919972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:46.894 [2024-12-09 15:24:47.987240] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
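Stripped of the xtrace prefixes, the target-side setup traced above reduces to the sequence below. This is a condensed sketch, not a verbatim replay: it reuses this run's detected cvl_0_0/cvl_0_1 ports, the 10.0.0.0/24 test addresses and the workspace path, and leaves out the helper-script bookkeeping.

# clear stale addresses, move one port into a private namespace, address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open TCP/4420 on the initiator-side port (the SPDK_NVMF comment is what cleanup greps for later)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment \
    --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# verify reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace, interrupt mode, cores 3-6 (-m 0x78)
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78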
00:31:46.894 [2024-12-09 15:24:47.987633] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:46.894 [2024-12-09 15:24:47.988048] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:46.894 [2024-12-09 15:24:47.988176] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:46.894 [2024-12-09 15:24:47.988253] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 [2024-12-09 15:24:48.052717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 Malloc0 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.894 15:24:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:46.894 [2024-12-09 15:24:48.136888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:46.894 { 00:31:46.894 "params": { 00:31:46.894 "name": "Nvme$subsystem", 00:31:46.894 "trtype": "$TEST_TRANSPORT", 00:31:46.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.894 "adrfam": "ipv4", 00:31:46.894 "trsvcid": "$NVMF_PORT", 00:31:46.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.894 "hdgst": ${hdgst:-false}, 00:31:46.894 "ddgst": ${ddgst:-false} 00:31:46.894 }, 00:31:46.894 "method": "bdev_nvme_attach_controller" 00:31:46.894 } 00:31:46.894 EOF 00:31:46.894 )") 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:46.894 15:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:46.894 "params": { 00:31:46.894 "name": "Nvme1", 00:31:46.894 "trtype": "tcp", 00:31:46.894 "traddr": "10.0.0.2", 00:31:46.894 "adrfam": "ipv4", 00:31:46.894 "trsvcid": "4420", 00:31:46.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:46.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:46.894 "hdgst": false, 00:31:46.894 "ddgst": false 00:31:46.894 }, 00:31:46.894 "method": "bdev_nvme_attach_controller" 00:31:46.894 }' 00:31:46.894 [2024-12-09 15:24:48.187539] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
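With the target listening on its RPC socket, bdevio.sh provisions it and only then launches the bdevio app. Issued by hand against the default /var/tmp/spdk.sock, the same provisioning calls would look roughly like this (a sketch; rpc_cmd in these scripts is assumed to be a thin wrapper around scripts/rpc.py):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same opts as NVMF_TRANSPORT_OPTS above
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420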
00:31:46.894 [2024-12-09 15:24:48.187584] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662003 ] 00:31:46.894 [2024-12-09 15:24:48.262975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:46.894 [2024-12-09 15:24:48.305178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.894 [2024-12-09 15:24:48.305073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.894 [2024-12-09 15:24:48.305178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.894 I/O targets: 00:31:46.894 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:46.894 00:31:46.894 00:31:46.894 CUnit - A unit testing framework for C - Version 2.1-3 00:31:46.894 http://cunit.sourceforge.net/ 00:31:46.894 00:31:46.894 00:31:46.894 Suite: bdevio tests on: Nvme1n1 00:31:46.894 Test: blockdev write read block ...passed 00:31:46.894 Test: blockdev write zeroes read block ...passed 00:31:46.894 Test: blockdev write zeroes read no split ...passed 00:31:46.894 Test: blockdev write zeroes read split ...passed 00:31:46.894 Test: blockdev write zeroes read split partial ...passed 00:31:46.894 Test: blockdev reset ...[2024-12-09 15:24:48.602813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:46.894 [2024-12-09 15:24:48.602871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d48b0 (9): Bad file descriptor 00:31:46.895 [2024-12-09 15:24:48.607196] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
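The parameter block printed a little further up is the whole initiator-side configuration: bdevio reads it through --json /dev/fd/62 (process substitution of gen_nvmf_target_json) and attaches to the subsystem, exposing it as bdev Nvme1n1. Written out as a standalone config file it would look roughly like the sketch below; the outer subsystems/config wrapper and the /tmp file name are illustrative assumptions, only the bdev_nvme_attach_controller entry is taken from the trace, and the real helper may emit additional bdev options.

cat <<'JSON' > /tmp/bdevio_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json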
00:31:46.895 passed 00:31:46.895 Test: blockdev write read 8 blocks ...passed 00:31:46.895 Test: blockdev write read size > 128k ...passed 00:31:46.895 Test: blockdev write read invalid size ...passed 00:31:46.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:46.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:46.895 Test: blockdev write read max offset ...passed 00:31:47.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:47.152 Test: blockdev writev readv 8 blocks ...passed 00:31:47.152 Test: blockdev writev readv 30 x 1block ...passed 00:31:47.152 Test: blockdev writev readv block ...passed 00:31:47.152 Test: blockdev writev readv size > 128k ...passed 00:31:47.152 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:47.152 Test: blockdev comparev and writev ...[2024-12-09 15:24:48.776093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.776120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.776134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.776143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.776434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.776445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.776456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.776462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.776734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.776744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.776759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.776767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.777043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.777053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:47.152 [2024-12-09 15:24:48.777064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:47.152 [2024-12-09 15:24:48.777071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:47.152 passed 00:31:47.152 Test: blockdev nvme passthru rw ...passed 00:31:47.152 Test: blockdev nvme passthru vendor specific ...[2024-12-09 15:24:48.858598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.153 [2024-12-09 15:24:48.858613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:47.153 [2024-12-09 15:24:48.858723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.153 [2024-12-09 15:24:48.858732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:47.153 [2024-12-09 15:24:48.858833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.153 [2024-12-09 15:24:48.858842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:47.153 [2024-12-09 15:24:48.858943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:47.153 [2024-12-09 15:24:48.858953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:47.153 passed 00:31:47.153 Test: blockdev nvme admin passthru ...passed 00:31:47.153 Test: blockdev copy ...passed 00:31:47.153 00:31:47.153 Run Summary: Type Total Ran Passed Failed Inactive 00:31:47.153 suites 1 1 n/a 0 0 00:31:47.153 tests 23 23 23 0 0 00:31:47.153 asserts 152 152 152 0 n/a 00:31:47.153 00:31:47.153 Elapsed time = 0.924 seconds 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.411 rmmod nvme_tcp 00:31:47.411 rmmod nvme_fabrics 00:31:47.411 rmmod nvme_keyring 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
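Teardown mirrors the setup: drop the subsystem, unload the host-side NVMe modules, stop the target and undo the per-test firewall and namespace changes. Condensed into a sketch (1661893 is this run's nvmf_tgt pid; the final namespace removal is an assumption about what _remove_spdk_ns does here):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                                  # also unloads nvme_fabrics/nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
kill 1661893                                             # killprocess: stop the interrupt-mode target
iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove only the rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                          # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1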
00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1661893 ']' 00:31:47.411 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1661893 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1661893 ']' 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1661893 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661893 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661893' 00:31:47.412 killing process with pid 1661893 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1661893 00:31:47.412 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1661893 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.671 15:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.203 15:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.203 00:31:50.203 real 0m9.801s 00:31:50.203 user 
0m7.848s 00:31:50.203 sys 0m5.121s 00:31:50.203 15:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.203 15:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:50.203 ************************************ 00:31:50.203 END TEST nvmf_bdevio 00:31:50.203 ************************************ 00:31:50.203 15:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:50.203 00:31:50.203 real 4m31.572s 00:31:50.203 user 9m4.302s 00:31:50.203 sys 1m49.960s 00:31:50.203 15:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.203 15:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:50.203 ************************************ 00:31:50.203 END TEST nvmf_target_core_interrupt_mode 00:31:50.203 ************************************ 00:31:50.203 15:24:51 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:50.203 15:24:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:50.203 15:24:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.203 15:24:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:50.203 ************************************ 00:31:50.203 START TEST nvmf_interrupt 00:31:50.203 ************************************ 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:50.203 * Looking for test storage... 
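With the interrupt-mode core suites done, the harness moves straight on to the dedicated nvmf_interrupt test. Outside of the run_test wrapper, the same test can be kicked off directly from this workspace's checkout (sketch):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode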
00:31:50.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:50.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.203 --rc genhtml_branch_coverage=1 00:31:50.203 --rc genhtml_function_coverage=1 00:31:50.203 --rc genhtml_legend=1 00:31:50.203 --rc geninfo_all_blocks=1 00:31:50.203 --rc geninfo_unexecuted_blocks=1 00:31:50.203 00:31:50.203 ' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:50.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.203 --rc genhtml_branch_coverage=1 00:31:50.203 --rc genhtml_function_coverage=1 00:31:50.203 --rc genhtml_legend=1 00:31:50.203 --rc geninfo_all_blocks=1 00:31:50.203 --rc geninfo_unexecuted_blocks=1 00:31:50.203 00:31:50.203 ' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:50.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.203 --rc genhtml_branch_coverage=1 00:31:50.203 --rc genhtml_function_coverage=1 00:31:50.203 --rc genhtml_legend=1 00:31:50.203 --rc geninfo_all_blocks=1 00:31:50.203 --rc geninfo_unexecuted_blocks=1 00:31:50.203 00:31:50.203 ' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:50.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.203 --rc genhtml_branch_coverage=1 00:31:50.203 --rc genhtml_function_coverage=1 00:31:50.203 --rc genhtml_legend=1 00:31:50.203 --rc geninfo_all_blocks=1 00:31:50.203 --rc geninfo_unexecuted_blocks=1 00:31:50.203 00:31:50.203 ' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.203 15:24:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.204 15:24:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.815 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.816 15:24:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.816 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.816 Found net devices under 0000:af:00.1: cvl_0_1 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.816 15:24:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:31:56.816 00:31:56.816 --- 10.0.0.2 ping statistics --- 00:31:56.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.816 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:31:56.816 00:31:56.816 --- 10.0.0.1 ping statistics --- 00:31:56.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.816 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1665654 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1665654 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1665654 ']' 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.816 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.816 [2024-12-09 15:24:57.693075] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.816 [2024-12-09 15:24:57.693943] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:31:56.816 [2024-12-09 15:24:57.693976] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.816 [2024-12-09 15:24:57.756208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:56.816 [2024-12-09 15:24:57.796592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
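nvmf_tcp_init above builds the loopback topology this test runs on: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction proves the path before the target is started inside that namespace with --interrupt-mode. Condensed, the plumbing amounts to the following (interface, namespace, and address names as used in this run):

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root namespace -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> initiator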
00:31:56.816 [2024-12-09 15:24:57.796627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.816 [2024-12-09 15:24:57.796635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.816 [2024-12-09 15:24:57.796640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.816 [2024-12-09 15:24:57.796646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.816 [2024-12-09 15:24:57.801235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.816 [2024-12-09 15:24:57.801239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.816 [2024-12-09 15:24:57.868638] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.817 [2024-12-09 15:24:57.868690] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:56.817 [2024-12-09 15:24:57.868822] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:56.817 5000+0 records in 00:31:56.817 5000+0 records out 00:31:56.817 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0177606 s, 577 MB/s 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.817 15:24:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.817 AIO0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.817 [2024-12-09 15:24:58.013908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.817 15:24:58 
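setup_bdev_aio and the transport RPC above give the interrupt-mode target something to export: dd writes a 10 MB backing file, bdev_aio_create wraps it as bdev AIO0 with a 2048-byte block size, and a TCP transport is created with an 8 KiB IO unit size and a queue depth of 256. rpc_cmd is essentially the harness wrapper around scripts/rpc.py, so the same steps done by hand would look roughly like this sketch (backing-file path shortened; RPC flags copied from the trace):

    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000      # 10 MB backing file (path assumed)
    scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048   # expose it as AIO bdev "AIO0"
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256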
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:56.817 [2024-12-09 15:24:58.054130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1665654 0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1665654 0 idle 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665654 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:00.23 reactor_0' 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665654 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:00.23 reactor_0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1665654 1 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1665654 1 idle 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665660 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665660 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1665841 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
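With the transport in place, the target is wired up by the three RPCs shown above: create subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDKISFASTANDAWESOME, -a allows any host), attach AIO0 as its namespace, and listen on 10.0.0.2:4420. A rough scripts/rpc.py equivalent of that sequence:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf command just launched then drives 4 KiB random I/O at queue depth 256 for 10 seconds from cores 2 and 3 (-c 0xC) with a 30% read mix (-M 30) against that subsystem, which is the load that should push both target reactors out of their idle interrupt state.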
00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1665654 0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1665654 0 busy 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:31:56.817 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665654 root 20 0 128.2g 46848 33792 R 60.0 0.0 0:00.32 reactor_0' 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665654 root 20 0 128.2g 46848 33792 R 60.0 0.0 0:00.32 reactor_0 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1665654 1 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1665654 1 busy 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665660 root 20 0 128.2g 46848 33792 R 93.8 0.0 0:00.23 reactor_1' 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665660 root 20 0 128.2g 46848 33792 R 93.8 0.0 0:00.23 reactor_1 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:57.099 15:24:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1665841 00:32:07.071 Initializing NVMe Controllers 00:32:07.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:07.071 Controller IO queue size 256, less than required. 00:32:07.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:07.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:07.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:07.071 Initialization complete. Launching workers. 
00:32:07.071 ======================================================== 00:32:07.071 Latency(us) 00:32:07.071 Device Information : IOPS MiB/s Average min max 00:32:07.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16569.20 64.72 15459.58 2363.96 30467.02 00:32:07.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16674.30 65.13 15356.93 3441.50 31601.00 00:32:07.071 ======================================================== 00:32:07.071 Total : 33243.49 129.86 15408.09 2363.96 31601.00 00:32:07.071 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1665654 0 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1665654 0 idle 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665654 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.21 reactor_0' 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665654 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.21 reactor_0 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1665654 1 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1665654 1 idle 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:32:07.071 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665660 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1' 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665660 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.330 15:25:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:07.896 15:25:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:07.897 15:25:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:07.897 15:25:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:07.897 15:25:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:07.897 15:25:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- 
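On the initiator side the test connects with nvme-cli and waits until a block device carrying the target's serial number shows up; a compressed version of that connect-and-wait step (host NQN/ID variables as generated earlier in the run):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 2                                # the harness also caps the number of retries
    done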
target/interrupt.sh@52 -- # for i in {0..1} 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1665654 0 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1665654 0 idle 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:09.802 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665654 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.48 reactor_0' 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665654 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.48 reactor_0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1665654 1 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1665654 1 idle 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1665654 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
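All of the reactor_is_idle / reactor_is_busy checks in this trace boil down to the same probe: take a single batch sample of per-thread CPU usage with top, pick out the reactor_N thread of the target process, and compare its %CPU column against a threshold (30 here). Stripped down, with the pid and index from this run used as placeholders:

    pid=1665654 idx=0 idle_threshold=30
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
    cpu_rate=${cpu_rate%.*}                                            # integer part only
    if (( cpu_rate > idle_threshold )); then echo busy; else echo idle; fi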
00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1665654 -w 256 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1665660 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1' 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1665660 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:10.061 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:10.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:10.320 15:25:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:10.320 rmmod nvme_tcp 00:32:10.320 rmmod nvme_fabrics 00:32:10.320 rmmod nvme_keyring 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1665654 ']' 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1665654 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1665654 ']' 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1665654 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1665654 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1665654' 00:32:10.320 killing process with pid 1665654 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1665654 00:32:10.320 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1665654 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:10.578 15:25:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.113 15:25:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.113 00:32:13.113 real 0m22.787s 00:32:13.113 user 0m39.653s 00:32:13.113 sys 0m8.351s 00:32:13.113 15:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.113 15:25:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:13.113 ************************************ 00:32:13.113 END TEST nvmf_interrupt 00:32:13.113 ************************************ 00:32:13.113 00:32:13.113 real 27m18.897s 00:32:13.113 user 56m18.962s 00:32:13.113 sys 9m18.223s 00:32:13.113 15:25:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.113 15:25:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.113 ************************************ 00:32:13.113 END TEST nvmf_tcp 00:32:13.113 ************************************ 00:32:13.113 15:25:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:13.113 15:25:14 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:13.113 15:25:14 -- 
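nvmftestfini then unwinds the setup: with the host already disconnected it unloads the NVMe/TCP kernel modules, kills the target, strips only the iptables rules the test tagged with SPDK_NVMF, removes the test namespace, and finally flushes the initiator address. Approximately:

    modprobe -r nvme-tcp nvme-fabrics           # the rmmod lines above show nvme_tcp/_fabrics/_keyring going away
    kill "$nvmfpid"                             # killprocess: terminate the target and wait for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk             # what _remove_spdk_ns amounts to here (assumed)
    ip -4 addr flush cvl_0_1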
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:13.113 15:25:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.113 15:25:14 -- common/autotest_common.sh@10 -- # set +x 00:32:13.113 ************************************ 00:32:13.113 START TEST spdkcli_nvmf_tcp 00:32:13.113 ************************************ 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:13.114 * Looking for test storage... 00:32:13.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.114 --rc genhtml_branch_coverage=1 00:32:13.114 --rc genhtml_function_coverage=1 00:32:13.114 --rc genhtml_legend=1 00:32:13.114 --rc geninfo_all_blocks=1 00:32:13.114 --rc geninfo_unexecuted_blocks=1 00:32:13.114 00:32:13.114 ' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.114 --rc genhtml_branch_coverage=1 00:32:13.114 --rc genhtml_function_coverage=1 00:32:13.114 --rc genhtml_legend=1 00:32:13.114 --rc geninfo_all_blocks=1 00:32:13.114 --rc geninfo_unexecuted_blocks=1 00:32:13.114 00:32:13.114 ' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.114 --rc genhtml_branch_coverage=1 00:32:13.114 --rc genhtml_function_coverage=1 00:32:13.114 --rc genhtml_legend=1 00:32:13.114 --rc geninfo_all_blocks=1 00:32:13.114 --rc geninfo_unexecuted_blocks=1 00:32:13.114 00:32:13.114 ' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.114 --rc genhtml_branch_coverage=1 00:32:13.114 --rc genhtml_function_coverage=1 00:32:13.114 --rc genhtml_legend=1 00:32:13.114 --rc geninfo_all_blocks=1 00:32:13.114 --rc geninfo_unexecuted_blocks=1 00:32:13.114 00:32:13.114 ' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:13.114 
15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:13.114 15:25:14 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:13.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1668567 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1668567 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1668567 ']' 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.114 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.114 [2024-12-09 15:25:14.702907] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
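For the spdkcli test the target is restarted in its default polling mode (-m 0x3 -p 0) and configured through spdkcli_job.py rather than raw RPCs; judging from the argument list and the "Executing command" echoes further down, each entry pairs a spdkcli command with a substring expected in the resulting tree plus a flag saying whether that match is required. The same commands can also be issued one at a time with scripts/spdkcli.py, assuming it accepts a command on its argv the way the "ll /nvmf" check later in this log does, e.g.:

    scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc1"
    scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
    scripts/spdkcli.py ll /nvmf                 # inspect what was created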
00:32:13.115 [2024-12-09 15:25:14.702957] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668567 ] 00:32:13.115 [2024-12-09 15:25:14.774940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:13.115 [2024-12-09 15:25:14.816337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.115 [2024-12-09 15:25:14.816340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.115 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.115 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:13.115 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:13.115 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.115 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.374 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:13.374 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:13.374 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:13.374 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:13.374 15:25:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.374 15:25:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:13.374 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:13.374 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:13.374 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:13.374 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:13.374 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:13.374 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:13.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:13.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:13.374 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:13.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:13.374 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:13.374 ' 00:32:15.903 [2024-12-09 15:25:17.629548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.278 [2024-12-09 15:25:18.969994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:19.806 [2024-12-09 15:25:21.461750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:22.337 [2024-12-09 15:25:23.616444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:23.710 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:23.710 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:23.710 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:23.710 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:23.710 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:23.710 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:23.710 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:23.710 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:23.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:23.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:23.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:23.710 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:23.710 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:23.710 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:23.711 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:23.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:23.711 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:23.711 15:25:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.277 
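
The create phase above drives the whole target layout through spdkcli_job.py in one batch, and check_match then captures 'll /nvmf' and diffs it against test/spdkcli/match_files/spdkcli_nvmf.test.match. The same layout can be reproduced one command per invocation with scripts/spdkcli.py against the default RPC socket; a minimal sketch, with paths assumed and the bdev/namespace pairings simplified from the run above:

    # Sketch only: rebuild a reduced version of the spdkcli config above,
    # one spdkcli.py invocation per command (assumes a running nvmf_tgt and
    # the SPDK tree at ./spdk; serial number taken from this run).
    SPDKCLI=./spdk/scripts/spdkcli.py
    $SPDKCLI /bdevs/malloc create 32 512 Malloc1
    $SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    $SPDKCLI ll /nvmf    # same listing that check_match compares with its .match file
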
15:25:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.277 15:25:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:24.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:24.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:24.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:24.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:24.277 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:24.277 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:24.277 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:24.277 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:24.277 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:24.277 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:24.277 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:24.277 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:24.278 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:24.278 ' 00:32:30.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:30.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:30.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:30.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:30.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:30.840 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:30.840 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:30.840 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:30.840 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:30.840 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:30.840 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:30.840 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:30.840 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:30.840 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.840 
15:25:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1668567 ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668567' 00:32:30.840 killing process with pid 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1668567 ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1668567 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1668567 ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1668567 00:32:30.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1668567) - No such process 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1668567 is not found' 00:32:30.840 Process with pid 1668567 is not found 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:30.840 00:32:30.840 real 0m17.318s 00:32:30.840 user 0m38.173s 00:32:30.840 sys 0m0.780s 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.840 15:25:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.840 ************************************ 00:32:30.840 END TEST spdkcli_nvmf_tcp 00:32:30.840 ************************************ 00:32:30.840 15:25:31 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:30.840 15:25:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:30.840 15:25:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.840 15:25:31 -- common/autotest_common.sh@10 -- # set +x 00:32:30.840 ************************************ 00:32:30.840 START TEST nvmf_identify_passthru 00:32:30.840 ************************************ 00:32:30.840 15:25:31 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:30.840 * Looking for test 
storage... 00:32:30.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:30.840 15:25:31 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.840 15:25:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.840 15:25:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.840 15:25:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:30.840 15:25:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:30.841 15:25:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.841 15:25:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.841 --rc genhtml_branch_coverage=1 00:32:30.841 --rc genhtml_function_coverage=1 00:32:30.841 --rc genhtml_legend=1 00:32:30.841 --rc geninfo_all_blocks=1 00:32:30.841 --rc geninfo_unexecuted_blocks=1 00:32:30.841 00:32:30.841 ' 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.841 --rc genhtml_branch_coverage=1 00:32:30.841 --rc genhtml_function_coverage=1 00:32:30.841 --rc genhtml_legend=1 00:32:30.841 --rc geninfo_all_blocks=1 00:32:30.841 --rc geninfo_unexecuted_blocks=1 00:32:30.841 00:32:30.841 ' 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.841 --rc genhtml_branch_coverage=1 00:32:30.841 --rc genhtml_function_coverage=1 00:32:30.841 --rc genhtml_legend=1 00:32:30.841 --rc geninfo_all_blocks=1 00:32:30.841 --rc geninfo_unexecuted_blocks=1 00:32:30.841 00:32:30.841 ' 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.841 --rc genhtml_branch_coverage=1 00:32:30.841 --rc genhtml_function_coverage=1 00:32:30.841 --rc genhtml_legend=1 00:32:30.841 --rc geninfo_all_blocks=1 00:32:30.841 --rc geninfo_unexecuted_blocks=1 00:32:30.841 00:32:30.841 ' 00:32:30.841 15:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.841 15:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:30.841 15:25:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.841 15:25:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.841 15:25:32 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.841 15:25:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.116 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.117 15:25:37 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:36.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:36.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:36.117 Found net devices under 0000:af:00.0: cvl_0_0 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:36.117 Found net devices under 0000:af:00.1: cvl_0_1 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.117 15:25:37 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:32:36.117 00:32:36.117 --- 10.0.0.2 ping statistics --- 00:32:36.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.117 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
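
nvmftestinit above splits the two e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1, with an iptables rule opening TCP/4420 on that interface. Condensed into a plain command sequence (interface names are specific to this host's NIC pair):

    # Condensed sketch of the namespace plumbing traced above.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace

The two pings are the same bidirectional sanity check the harness performs before any NVMe/TCP traffic is attempted.
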
00:32:36.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:32:36.117 00:32:36.117 --- 10.0.0.1 ping statistics --- 00:32:36.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.117 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.117 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.118 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.118 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.118 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.118 15:25:37 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.118 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:36.118 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.118 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:36.118 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:36.118 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:36.118 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:36.118 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:36.377 15:25:37 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:36.377 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:36.377 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:36.377 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:36.377 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:36.377 15:25:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:40.571 15:25:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ807001JM1P0FGN 00:32:40.571 15:25:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:40.571 15:25:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:40.571 15:25:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1675754 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1675754 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1675754 ']' 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.757 [2024-12-09 15:25:46.379777] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:32:44.757 [2024-12-09 15:25:46.379825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.757 [2024-12-09 15:25:46.455666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:44.757 [2024-12-09 15:25:46.497459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.757 [2024-12-09 15:25:46.497497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
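
From here the passthru target comes up: nvmf_tgt starts inside the target namespace with --wait-for-rpc, identify passthru is enabled before framework init, and the local PCIe controller is re-exported over TCP so its Identify data can be compared with what was read straight from the device. A condensed sketch of that flow via rpc.py, not the test itself; the script paths and the sleep stand-in for waitforlisten are assumptions, while the BDF, addresses, NQN and RPC arguments are the ones from this run:

    NS="ip netns exec cvl_0_0_ns_spdk"
    RPC=./spdk/scripts/rpc.py                       # assumed repo layout
    $NS ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    sleep 2                                         # stand-in for the test's waitforlisten
    $RPC nvmf_set_config --passthru-identify-ctrlr  # must be set before framework init
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Serial number straight from the PCIe device vs. through the TCP subsystem:
    IDENTIFY=./spdk/build/bin/spdk_nvme_identify
    pcie_sn=$($IDENTIFY -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    tcp_sn=$($IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
             | grep 'Serial Number:' | awk '{print $3}')
    [ "$pcie_sn" = "$tcp_sn" ] && echo "passthru identify OK: $pcie_sn"

The serial and model checks that follow in the log are exactly this comparison, done once against the PCIe controller and once against the NVMe-oF subsystem that passes Identify through to it.
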
00:32:44.757 [2024-12-09 15:25:46.497505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.757 [2024-12-09 15:25:46.497511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.757 [2024-12-09 15:25:46.497517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.757 [2024-12-09 15:25:46.498862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.757 [2024-12-09 15:25:46.498973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.757 [2024-12-09 15:25:46.499078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.757 [2024-12-09 15:25:46.499079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.757 INFO: Log level set to 20 00:32:44.757 INFO: Requests: 00:32:44.757 { 00:32:44.757 "jsonrpc": "2.0", 00:32:44.757 "method": "nvmf_set_config", 00:32:44.757 "id": 1, 00:32:44.757 "params": { 00:32:44.757 "admin_cmd_passthru": { 00:32:44.757 "identify_ctrlr": true 00:32:44.757 } 00:32:44.757 } 00:32:44.757 } 00:32:44.757 00:32:44.757 INFO: response: 00:32:44.757 { 00:32:44.757 "jsonrpc": "2.0", 00:32:44.757 "id": 1, 00:32:44.757 "result": true 00:32:44.757 } 00:32:44.757 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.757 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.757 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.757 INFO: Setting log level to 20 00:32:44.757 INFO: Setting log level to 20 00:32:44.757 INFO: Log level set to 20 00:32:44.757 INFO: Log level set to 20 00:32:44.757 INFO: Requests: 00:32:44.757 { 00:32:44.757 "jsonrpc": "2.0", 00:32:44.757 "method": "framework_start_init", 00:32:44.757 "id": 1 00:32:44.757 } 00:32:44.757 00:32:44.757 INFO: Requests: 00:32:44.757 { 00:32:44.757 "jsonrpc": "2.0", 00:32:44.757 "method": "framework_start_init", 00:32:44.757 "id": 1 00:32:44.757 } 00:32:44.757 00:32:45.016 [2024-12-09 15:25:46.610321] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:45.016 INFO: response: 00:32:45.016 { 00:32:45.016 "jsonrpc": "2.0", 00:32:45.016 "id": 1, 00:32:45.016 "result": true 00:32:45.016 } 00:32:45.016 00:32:45.016 INFO: response: 00:32:45.016 { 00:32:45.016 "jsonrpc": "2.0", 00:32:45.016 "id": 1, 00:32:45.016 "result": true 00:32:45.016 } 00:32:45.016 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.016 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.016 15:25:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:45.016 INFO: Setting log level to 40 00:32:45.016 INFO: Setting log level to 40 00:32:45.016 INFO: Setting log level to 40 00:32:45.016 [2024-12-09 15:25:46.623575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.016 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:45.016 15:25:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.016 15:25:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.296 Nvme0n1 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.296 [2024-12-09 15:25:49.536427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.296 [ 00:32:48.296 { 00:32:48.296 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:48.296 "subtype": "Discovery", 00:32:48.296 "listen_addresses": [], 00:32:48.296 "allow_any_host": true, 00:32:48.296 "hosts": [] 00:32:48.296 }, 00:32:48.296 { 00:32:48.296 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:48.296 "subtype": "NVMe", 00:32:48.296 "listen_addresses": [ 00:32:48.296 { 00:32:48.296 "trtype": "TCP", 00:32:48.296 "adrfam": "IPv4", 00:32:48.296 "traddr": "10.0.0.2", 00:32:48.296 "trsvcid": "4420" 00:32:48.296 } 00:32:48.296 ], 00:32:48.296 "allow_any_host": true, 00:32:48.296 "hosts": [], 00:32:48.296 "serial_number": 
"SPDK00000000000001", 00:32:48.296 "model_number": "SPDK bdev Controller", 00:32:48.296 "max_namespaces": 1, 00:32:48.296 "min_cntlid": 1, 00:32:48.296 "max_cntlid": 65519, 00:32:48.296 "namespaces": [ 00:32:48.296 { 00:32:48.296 "nsid": 1, 00:32:48.296 "bdev_name": "Nvme0n1", 00:32:48.296 "name": "Nvme0n1", 00:32:48.296 "nguid": "64D0FF2ADFF3474DACFC955FFA1EE47C", 00:32:48.296 "uuid": "64d0ff2a-dff3-474d-acfc-955ffa1ee47c" 00:32:48.296 } 00:32:48.296 ] 00:32:48.296 } 00:32:48.296 ] 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:48.296 15:25:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:48.296 15:25:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:48.296 15:25:49 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.296 15:25:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:48.296 15:25:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.296 15:25:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:48.296 15:25:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.296 15:25:49 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.296 rmmod nvme_tcp 00:32:48.296 rmmod nvme_fabrics 00:32:48.296 rmmod nvme_keyring 00:32:48.296 15:25:50 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.296 15:25:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:48.296 15:25:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:48.296 15:25:50 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1675754 ']' 00:32:48.296 15:25:50 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1675754 00:32:48.296 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1675754 ']' 00:32:48.296 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1675754 00:32:48.296 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:48.296 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.296 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1675754 00:32:48.554 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.554 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.554 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1675754' 00:32:48.554 killing process with pid 1675754 00:32:48.554 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1675754 00:32:48.554 15:25:50 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1675754 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.928 15:25:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.928 15:25:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:49.928 15:25:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.832 15:25:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.832 00:32:51.832 real 0m21.786s 00:32:51.832 user 0m26.877s 00:32:51.832 sys 0m6.102s 00:32:51.832 15:25:53 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.832 15:25:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.832 ************************************ 00:32:51.832 END TEST nvmf_identify_passthru 00:32:51.832 ************************************ 00:32:52.091 15:25:53 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:52.091 15:25:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:52.091 15:25:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.091 15:25:53 -- common/autotest_common.sh@10 -- # set +x 00:32:52.091 ************************************ 00:32:52.091 START TEST nvmf_dif 00:32:52.091 ************************************ 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:52.091 * Looking for test 
storage... 00:32:52.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.091 --rc genhtml_branch_coverage=1 00:32:52.091 --rc genhtml_function_coverage=1 00:32:52.091 --rc genhtml_legend=1 00:32:52.091 --rc geninfo_all_blocks=1 00:32:52.091 --rc geninfo_unexecuted_blocks=1 00:32:52.091 00:32:52.091 ' 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.091 --rc genhtml_branch_coverage=1 00:32:52.091 --rc genhtml_function_coverage=1 00:32:52.091 --rc genhtml_legend=1 00:32:52.091 --rc geninfo_all_blocks=1 00:32:52.091 --rc geninfo_unexecuted_blocks=1 00:32:52.091 00:32:52.091 ' 00:32:52.091 15:25:53 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.091 --rc genhtml_branch_coverage=1 00:32:52.091 --rc genhtml_function_coverage=1 00:32:52.091 --rc genhtml_legend=1 00:32:52.091 --rc geninfo_all_blocks=1 00:32:52.091 --rc geninfo_unexecuted_blocks=1 00:32:52.091 00:32:52.091 ' 00:32:52.091 15:25:53 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.091 --rc genhtml_branch_coverage=1 00:32:52.091 --rc genhtml_function_coverage=1 00:32:52.091 --rc genhtml_legend=1 00:32:52.091 --rc geninfo_all_blocks=1 00:32:52.091 --rc geninfo_unexecuted_blocks=1 00:32:52.091 00:32:52.091 ' 00:32:52.091 15:25:53 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.091 15:25:53 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.091 15:25:53 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.351 15:25:53 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.351 15:25:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.351 15:25:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.351 15:25:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.351 15:25:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:52.351 15:25:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.351 15:25:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:52.351 15:25:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:52.351 15:25:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:52.351 15:25:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:52.351 15:25:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.351 15:25:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:52.351 15:25:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:52.351 15:25:53 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:32:52.351 15:25:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:58.916 15:25:59 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:58.917 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.917 
15:25:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:58.917 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:58.917 Found net devices under 0000:af:00.0: cvl_0_0 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:58.917 Found net devices under 0000:af:00.1: cvl_0_1 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:58.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:32:58.917 00:32:58.917 --- 10.0.0.2 ping statistics --- 00:32:58.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.917 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:58.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:32:58.917 00:32:58.917 --- 10.0.0.1 ping statistics --- 00:32:58.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.917 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:58.917 15:25:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:00.820 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:00.820 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:00.820 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:00.820 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:01.079 15:26:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:01.079 15:26:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1681201 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1681201 00:33:01.079 15:26:02 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1681201 ']' 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@842 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.079 15:26:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.079 [2024-12-09 15:26:02.836522] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:33:01.079 [2024-12-09 15:26:02.836564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.337 [2024-12-09 15:26:02.914642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.337 [2024-12-09 15:26:02.953512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.337 [2024-12-09 15:26:02.953546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.337 [2024-12-09 15:26:02.953554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.337 [2024-12-09 15:26:02.953560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.337 [2024-12-09 15:26:02.953565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.337 [2024-12-09 15:26:02.954099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:01.337 15:26:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 15:26:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.337 15:26:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:01.337 15:26:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 [2024-12-09 15:26:03.088429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.337 15:26:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.337 15:26:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.337 ************************************ 00:33:01.337 START TEST fio_dif_1_default 00:33:01.337 ************************************ 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub 
in "$@" 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.337 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.595 bdev_null0 00:33:01.595 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.595 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:01.596 [2024-12-09 15:26:03.160725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.596 { 00:33:01.596 "params": { 00:33:01.596 "name": "Nvme$subsystem", 00:33:01.596 "trtype": "$TEST_TRANSPORT", 00:33:01.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.596 "adrfam": "ipv4", 00:33:01.596 "trsvcid": "$NVMF_PORT", 00:33:01.596 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.596 "hdgst": ${hdgst:-false}, 00:33:01.596 "ddgst": ${ddgst:-false} 00:33:01.596 }, 00:33:01.596 "method": "bdev_nvme_attach_controller" 00:33:01.596 } 00:33:01.596 EOF 00:33:01.596 )") 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.596 "params": { 00:33:01.596 "name": "Nvme0", 00:33:01.596 "trtype": "tcp", 00:33:01.596 "traddr": "10.0.0.2", 00:33:01.596 "adrfam": "ipv4", 00:33:01.596 "trsvcid": "4420", 00:33:01.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.596 "hdgst": false, 00:33:01.596 "ddgst": false 00:33:01.596 }, 00:33:01.596 "method": "bdev_nvme_attach_controller" 00:33:01.596 }' 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:01.596 15:26:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.855 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:01.855 fio-3.35 00:33:01.855 Starting 1 thread 00:33:14.159 00:33:14.159 filename0: (groupid=0, jobs=1): err= 0: pid=1681568: Mon Dec 9 15:26:14 2024 00:33:14.159 read: IOPS=201, BW=806KiB/s (826kB/s)(8064KiB/10002msec) 00:33:14.159 slat (nsec): min=5799, max=32642, avg=6265.39, stdev=849.76 00:33:14.159 clat (usec): min=371, max=44771, avg=19827.54, stdev=20427.59 00:33:14.159 lat (usec): min=377, max=44804, avg=19833.81, stdev=20427.56 00:33:14.159 clat percentiles (usec): 00:33:14.159 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 412], 20.00th=[ 429], 00:33:14.159 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 478], 60.00th=[40633], 00:33:14.159 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:14.159 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:33:14.159 | 99.99th=[44827] 00:33:14.159 bw ( KiB/s): min= 672, max= 896, per=99.60%, avg=803.37, stdev=53.22, samples=19 00:33:14.159 iops : min= 168, max= 224, avg=200.84, stdev=13.31, samples=19 00:33:14.159 lat (usec) : 500=52.28%, 750=0.30% 00:33:14.159 lat (msec) : 50=47.42% 00:33:14.159 cpu : usr=92.14%, sys=7.61%, ctx=13, majf=0, minf=0 00:33:14.159 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:14.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.159 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:14.159 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:14.159 
00:33:14.159 Run status group 0 (all jobs): 00:33:14.159 READ: bw=806KiB/s (826kB/s), 806KiB/s-806KiB/s (826kB/s-826kB/s), io=8064KiB (8258kB), run=10002-10002msec 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.159 00:33:14.159 real 0m11.190s 00:33:14.159 user 0m16.657s 00:33:14.159 sys 0m1.057s 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:14.159 ************************************ 00:33:14.159 END TEST fio_dif_1_default 00:33:14.159 ************************************ 00:33:14.159 15:26:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:14.159 15:26:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:14.159 15:26:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.159 15:26:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:14.159 ************************************ 00:33:14.159 START TEST fio_dif_1_multi_subsystems 00:33:14.159 ************************************ 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.159 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.159 bdev_null0 00:33:14.159 15:26:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 [2024-12-09 15:26:14.420736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 bdev_null1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:14.160 { 00:33:14.160 "params": { 00:33:14.160 "name": "Nvme$subsystem", 00:33:14.160 "trtype": "$TEST_TRANSPORT", 00:33:14.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.160 "adrfam": "ipv4", 00:33:14.160 "trsvcid": "$NVMF_PORT", 00:33:14.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.160 "hdgst": ${hdgst:-false}, 00:33:14.160 "ddgst": ${ddgst:-false} 00:33:14.160 }, 00:33:14.160 "method": "bdev_nvme_attach_controller" 00:33:14.160 } 00:33:14.160 EOF 00:33:14.160 )") 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:14.160 { 00:33:14.160 "params": { 00:33:14.160 "name": "Nvme$subsystem", 00:33:14.160 "trtype": "$TEST_TRANSPORT", 00:33:14.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.160 "adrfam": "ipv4", 00:33:14.160 "trsvcid": "$NVMF_PORT", 00:33:14.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.160 "hdgst": ${hdgst:-false}, 00:33:14.160 "ddgst": ${ddgst:-false} 00:33:14.160 }, 00:33:14.160 "method": "bdev_nvme_attach_controller" 00:33:14.160 } 00:33:14.160 EOF 00:33:14.160 )") 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:14.160 "params": { 00:33:14.160 "name": "Nvme0", 00:33:14.160 "trtype": "tcp", 00:33:14.160 "traddr": "10.0.0.2", 00:33:14.160 "adrfam": "ipv4", 00:33:14.160 "trsvcid": "4420", 00:33:14.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:14.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:14.160 "hdgst": false, 00:33:14.160 "ddgst": false 00:33:14.160 }, 00:33:14.160 "method": "bdev_nvme_attach_controller" 00:33:14.160 },{ 00:33:14.160 "params": { 00:33:14.160 "name": "Nvme1", 00:33:14.160 "trtype": "tcp", 00:33:14.160 "traddr": "10.0.0.2", 00:33:14.160 "adrfam": "ipv4", 00:33:14.160 "trsvcid": "4420", 00:33:14.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:14.160 "hdgst": false, 00:33:14.160 "ddgst": false 00:33:14.160 }, 00:33:14.160 "method": "bdev_nvme_attach_controller" 00:33:14.160 }' 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:14.160 15:26:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:14.160 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:14.160 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:14.160 fio-3.35 00:33:14.160 Starting 2 threads 00:33:24.133 00:33:24.133 filename0: (groupid=0, jobs=1): err= 0: pid=1683512: Mon Dec 9 15:26:25 2024 00:33:24.133 read: IOPS=208, BW=833KiB/s (853kB/s)(8352KiB/10027msec) 00:33:24.133 slat (nsec): min=5846, max=36270, avg=6909.91, stdev=2326.98 00:33:24.133 clat (usec): min=376, max=42573, avg=19187.12, stdev=20328.27 00:33:24.133 lat (usec): min=382, max=42580, avg=19194.03, stdev=20327.65 00:33:24.133 clat percentiles (usec): 00:33:24.133 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408], 00:33:24.133 | 30.00th=[ 416], 40.00th=[ 433], 50.00th=[ 603], 60.00th=[40633], 00:33:24.133 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:24.134 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:24.134 | 99.99th=[42730] 00:33:24.134 bw ( KiB/s): min= 768, max= 960, per=67.62%, avg=833.60, stdev=60.96, samples=20 00:33:24.134 iops : min= 192, max= 240, avg=208.40, stdev=15.24, samples=20 00:33:24.134 lat (usec) : 500=46.22%, 750=7.42%, 1000=0.38% 00:33:24.134 lat (msec) : 50=45.98% 00:33:24.134 cpu : usr=96.25%, sys=3.48%, ctx=23, majf=0, minf=187 00:33:24.134 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.134 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.134 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:24.134 filename1: (groupid=0, jobs=1): err= 0: pid=1683513: Mon Dec 9 15:26:25 2024 00:33:24.134 read: IOPS=99, BW=399KiB/s (409kB/s)(4000KiB/10021msec) 00:33:24.134 slat (nsec): min=5862, max=46036, avg=7615.31, stdev=3159.98 00:33:24.134 clat (usec): min=393, max=42500, avg=40057.53, stdev=6223.66 00:33:24.134 lat (usec): min=399, max=42507, avg=40065.14, stdev=6223.70 00:33:24.134 clat percentiles (usec): 00:33:24.134 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:24.134 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:24.134 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:24.134 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:24.134 | 99.99th=[42730] 00:33:24.134 bw ( KiB/s): min= 384, max= 480, per=32.31%, avg=398.40, stdev=24.29, samples=20 00:33:24.134 iops : min= 96, max= 120, avg=99.60, stdev= 6.07, samples=20 00:33:24.134 lat (usec) : 500=2.40% 00:33:24.134 lat (msec) : 50=97.60% 00:33:24.134 cpu : usr=96.96%, sys=2.78%, ctx=10, majf=0, minf=103 00:33:24.134 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:24.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.134 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.134 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:24.134 00:33:24.134 Run status group 0 (all jobs): 00:33:24.134 READ: bw=1232KiB/s (1261kB/s), 399KiB/s-833KiB/s (409kB/s-853kB/s), io=12.1MiB (12.6MB), run=10021-10027msec 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.134 00:33:24.134 real 0m11.435s 00:33:24.134 user 0m26.661s 00:33:24.134 sys 0m1.004s 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 ************************************ 00:33:24.134 END TEST fio_dif_1_multi_subsystems 00:33:24.134 ************************************ 00:33:24.134 15:26:25 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:33:24.134 15:26:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:24.134 15:26:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 ************************************ 00:33:24.134 START TEST fio_dif_rand_params 00:33:24.134 ************************************ 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 bdev_null0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.134 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:24.393 [2024-12-09 15:26:25.933480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:24.393 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.394 { 00:33:24.394 "params": { 00:33:24.394 "name": "Nvme$subsystem", 00:33:24.394 "trtype": "$TEST_TRANSPORT", 00:33:24.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.394 "adrfam": "ipv4", 00:33:24.394 "trsvcid": "$NVMF_PORT", 00:33:24.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.394 "hdgst": ${hdgst:-false}, 00:33:24.394 "ddgst": ${ddgst:-false} 00:33:24.394 }, 00:33:24.394 "method": "bdev_nvme_attach_controller" 00:33:24.394 } 00:33:24.394 EOF 00:33:24.394 )") 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.394 "params": { 00:33:24.394 "name": "Nvme0", 00:33:24.394 "trtype": "tcp", 00:33:24.394 "traddr": "10.0.0.2", 00:33:24.394 "adrfam": "ipv4", 00:33:24.394 "trsvcid": "4420", 00:33:24.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.394 "hdgst": false, 00:33:24.394 "ddgst": false 00:33:24.394 }, 00:33:24.394 "method": "bdev_nvme_attach_controller" 00:33:24.394 }' 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:24.394 15:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.394 15:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.394 15:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.394 15:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:24.394 15:26:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.653 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:24.653 ... 
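[annotation] The JSON printed above is what the fio bdev plugin consumes through --spdk_json_conf, handed over via /dev/fd/62, with the generated job file on /dev/fd/61. A sketch of the same invocation using ordinary files: the plugin and fio paths are the ones in the trace, the JSON file is assumed to hold the bdev_nvme_attach_controller config printed above in whatever wrapper gen_nvmf_target_json emits (not visible in this log), and the job body is an illustrative reconstruction of this case's parameters (randread, 128k blocks, iodepth 3, 3 jobs, 5 s) with an assumed bdev name Nvme0n1.

#!/usr/bin/env bash
# Sketch only: drive the SPDK fio bdev plugin the way the harness does,
# but with regular files instead of process-substitution descriptors.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
JSON_CONF=/tmp/nvme0_bdev.json   # bdev config as printed above (wrapper assumed)

# Job file matching this test case; "Nvme0n1" is an assumed bdev name
# derived from the controller name "Nvme0" in the JSON config.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf "$JSON_CONF" /tmp/dif_rand_params.fio
[/annotation]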
00:33:24.653 fio-3.35 00:33:24.653 Starting 3 threads 00:33:31.219 00:33:31.219 filename0: (groupid=0, jobs=1): err= 0: pid=1685453: Mon Dec 9 15:26:31 2024 00:33:31.219 read: IOPS=327, BW=40.9MiB/s (42.9MB/s)(206MiB/5047msec) 00:33:31.219 slat (nsec): min=6184, max=32048, avg=11077.48, stdev=2252.39 00:33:31.219 clat (usec): min=3313, max=52721, avg=9131.40, stdev=5198.47 00:33:31.219 lat (usec): min=3319, max=52732, avg=9142.48, stdev=5198.57 00:33:31.219 clat percentiles (usec): 00:33:31.219 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 7504], 00:33:31.219 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:33:31.219 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10945], 00:33:31.219 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51643], 99.95th=[52691], 00:33:31.219 | 99.99th=[52691] 00:33:31.219 bw ( KiB/s): min=28672, max=48128, per=35.26%, avg=42188.80, stdev=5993.78, samples=10 00:33:31.219 iops : min= 224, max= 376, avg=329.60, stdev=46.83, samples=10 00:33:31.219 lat (msec) : 4=0.18%, 10=84.37%, 20=13.87%, 50=1.27%, 100=0.30% 00:33:31.219 cpu : usr=95.86%, sys=3.80%, ctx=13, majf=0, minf=11 00:33:31.220 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:31.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.220 issued rwts: total=1651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:31.220 filename0: (groupid=0, jobs=1): err= 0: pid=1685454: Mon Dec 9 15:26:31 2024 00:33:31.220 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(194MiB/5042msec) 00:33:31.220 slat (nsec): min=6171, max=39040, avg=11302.09, stdev=2394.26 00:33:31.220 clat (usec): min=3553, max=51572, avg=9731.83, stdev=5735.66 00:33:31.220 lat (usec): min=3565, max=51586, avg=9743.13, stdev=5735.65 00:33:31.220 clat percentiles (usec): 00:33:31.220 | 1.00th=[ 5407], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 7898], 00:33:31.220 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:33:31.220 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:33:31.220 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:33:31.220 | 99.99th=[51643] 00:33:31.220 bw ( KiB/s): min=34048, max=43520, per=33.14%, avg=39654.40, stdev=3498.56, samples=10 00:33:31.220 iops : min= 266, max= 340, avg=309.80, stdev=27.33, samples=10 00:33:31.220 lat (msec) : 4=0.39%, 10=72.36%, 20=25.32%, 50=1.35%, 100=0.58% 00:33:31.220 cpu : usr=95.44%, sys=4.27%, ctx=11, majf=0, minf=9 00:33:31.220 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:31.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.220 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:31.220 filename0: (groupid=0, jobs=1): err= 0: pid=1685455: Mon Dec 9 15:26:31 2024 00:33:31.220 read: IOPS=302, BW=37.9MiB/s (39.7MB/s)(189MiB/5003msec) 00:33:31.220 slat (nsec): min=6212, max=28828, avg=11482.44, stdev=2131.65 00:33:31.220 clat (usec): min=3552, max=50469, avg=9892.54, stdev=3159.47 00:33:31.220 lat (usec): min=3561, max=50495, avg=9904.02, stdev=3159.94 00:33:31.220 clat percentiles (usec): 00:33:31.220 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6390], 
20.00th=[ 8356], 00:33:31.220 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10552], 00:33:31.220 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11863], 95.00th=[12256], 00:33:31.220 | 99.00th=[13566], 99.50th=[14353], 99.90th=[50594], 99.95th=[50594], 00:33:31.220 | 99.99th=[50594] 00:33:31.220 bw ( KiB/s): min=35328, max=44288, per=32.37%, avg=38732.80, stdev=2893.92, samples=10 00:33:31.220 iops : min= 276, max= 346, avg=302.60, stdev=22.61, samples=10 00:33:31.220 lat (msec) : 4=0.40%, 10=43.96%, 20=55.25%, 50=0.20%, 100=0.20% 00:33:31.220 cpu : usr=96.08%, sys=3.62%, ctx=7, majf=0, minf=9 00:33:31.220 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:31.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.220 issued rwts: total=1515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:31.220 00:33:31.220 Run status group 0 (all jobs): 00:33:31.220 READ: bw=117MiB/s (123MB/s), 37.9MiB/s-40.9MiB/s (39.7MB/s-42.9MB/s), io=590MiB (618MB), run=5003-5047msec 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 bdev_null0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 [2024-12-09 15:26:32.060572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 bdev_null1 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 bdev_null2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.220 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
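[annotation] The second case repeats the same RPC sequence for three subsystems, this time backed by DIF-type-2 null bdevs (the 4k, 8-job, iodepth-16 parameters set above). Condensed into a loop, again assuming the scripts/rpc.py wrapper from the earlier sketch; every RPC name and argument is taken from the trace for cnode0 through cnode2.

#!/usr/bin/env bash
# Sketch only: the three-subsystem setup traced above (cnode0..cnode2),
# each backed by a DIF-type-2 null bdev and listening on 10.0.0.2:4420.
set -e
RPC=./scripts/rpc.py   # assumed wrapper, as in the earlier sketch

for i in 0 1 2; do
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
         --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
         -t tcp -a 10.0.0.2 -s 4420
done
[/annotation]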
00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.221 { 00:33:31.221 "params": { 00:33:31.221 "name": "Nvme$subsystem", 00:33:31.221 "trtype": "$TEST_TRANSPORT", 00:33:31.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.221 "adrfam": "ipv4", 00:33:31.221 "trsvcid": "$NVMF_PORT", 00:33:31.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.221 "hdgst": ${hdgst:-false}, 00:33:31.221 "ddgst": ${ddgst:-false} 00:33:31.221 }, 00:33:31.221 "method": "bdev_nvme_attach_controller" 00:33:31.221 } 00:33:31.221 EOF 00:33:31.221 )") 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.221 { 00:33:31.221 "params": { 00:33:31.221 "name": "Nvme$subsystem", 00:33:31.221 "trtype": "$TEST_TRANSPORT", 00:33:31.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.221 "adrfam": "ipv4", 00:33:31.221 "trsvcid": "$NVMF_PORT", 00:33:31.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.221 "hdgst": ${hdgst:-false}, 00:33:31.221 "ddgst": ${ddgst:-false} 00:33:31.221 }, 00:33:31.221 "method": "bdev_nvme_attach_controller" 00:33:31.221 } 00:33:31.221 EOF 00:33:31.221 )") 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.221 { 00:33:31.221 "params": { 00:33:31.221 "name": "Nvme$subsystem", 00:33:31.221 "trtype": "$TEST_TRANSPORT", 00:33:31.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.221 "adrfam": "ipv4", 00:33:31.221 "trsvcid": "$NVMF_PORT", 00:33:31.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.221 "hdgst": ${hdgst:-false}, 00:33:31.221 "ddgst": ${ddgst:-false} 00:33:31.221 }, 00:33:31.221 "method": "bdev_nvme_attach_controller" 00:33:31.221 } 00:33:31.221 EOF 00:33:31.221 )") 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.221 "params": { 00:33:31.221 "name": "Nvme0", 00:33:31.221 "trtype": "tcp", 00:33:31.221 "traddr": "10.0.0.2", 00:33:31.221 "adrfam": "ipv4", 00:33:31.221 "trsvcid": "4420", 00:33:31.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:31.221 "hdgst": false, 00:33:31.221 "ddgst": false 00:33:31.221 }, 00:33:31.221 "method": "bdev_nvme_attach_controller" 00:33:31.221 },{ 00:33:31.221 "params": { 00:33:31.221 "name": "Nvme1", 00:33:31.221 "trtype": "tcp", 00:33:31.221 "traddr": "10.0.0.2", 00:33:31.221 "adrfam": "ipv4", 00:33:31.221 "trsvcid": "4420", 00:33:31.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.221 "hdgst": false, 00:33:31.221 "ddgst": false 00:33:31.221 }, 00:33:31.221 "method": "bdev_nvme_attach_controller" 00:33:31.221 },{ 00:33:31.221 "params": { 00:33:31.221 "name": "Nvme2", 00:33:31.221 "trtype": "tcp", 00:33:31.221 "traddr": "10.0.0.2", 00:33:31.221 "adrfam": "ipv4", 00:33:31.221 "trsvcid": "4420", 00:33:31.221 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:31.221 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:31.221 "hdgst": false, 00:33:31.221 "ddgst": false 00:33:31.221 }, 00:33:31.221 "method": "bdev_nvme_attach_controller" 00:33:31.221 }' 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:31.221 15:26:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:31.221 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:31.221 ... 00:33:31.221 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:31.221 ... 00:33:31.221 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:31.221 ... 00:33:31.221 fio-3.35 00:33:31.221 Starting 24 threads 00:33:43.417 00:33:43.417 filename0: (groupid=0, jobs=1): err= 0: pid=1686542: Mon Dec 9 15:26:43 2024 00:33:43.417 read: IOPS=526, BW=2108KiB/s (2158kB/s)(20.6MiB/10021msec) 00:33:43.417 slat (nsec): min=7536, max=90220, avg=32278.93, stdev=19668.43 00:33:43.417 clat (usec): min=9119, max=32969, avg=30067.89, stdev=1871.48 00:33:43.417 lat (usec): min=9138, max=32986, avg=30100.17, stdev=1872.97 00:33:43.417 clat percentiles (usec): 00:33:43.417 | 1.00th=[17433], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.417 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:43.417 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:43.417 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32900], 99.95th=[32900], 00:33:43.417 | 99.99th=[32900] 00:33:43.417 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2105.35, stdev=76.75, samples=20 00:33:43.417 iops : min= 512, max= 574, avg=526.30, stdev=19.09, samples=20 00:33:43.417 lat (msec) : 10=0.27%, 20=0.83%, 50=98.90% 00:33:43.417 cpu : usr=98.65%, sys=0.94%, ctx=14, majf=0, minf=9 00:33:43.417 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.417 filename0: (groupid=0, jobs=1): err= 0: pid=1686543: Mon Dec 9 15:26:43 2024 00:33:43.417 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:33:43.417 slat (nsec): min=7770, max=86144, avg=26011.61, stdev=16667.45 00:33:43.417 clat (usec): min=22766, max=45474, avg=30325.90, stdev=978.56 00:33:43.417 lat (usec): min=22782, max=45491, avg=30351.91, stdev=979.25 00:33:43.417 clat percentiles (usec): 00:33:43.417 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:43.417 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.417 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.417 | 99.00th=[31065], 99.50th=[31327], 99.90th=[45351], 99.95th=[45351], 00:33:43.417 | 99.99th=[45351] 00:33:43.417 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.417 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.417 lat (msec) : 50=100.00% 00:33:43.417 cpu : usr=98.55%, sys=1.06%, ctx=13, majf=0, minf=9 00:33:43.417 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.417 filename0: (groupid=0, jobs=1): err= 0: pid=1686544: Mon Dec 9 15:26:43 2024 00:33:43.417 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:33:43.417 slat (usec): min=7, max=102, avg=30.28, stdev=23.96 00:33:43.417 clat (usec): min=13029, max=37138, avg=30214.85, stdev=1450.87 00:33:43.417 lat (usec): min=13052, max=37180, avg=30245.13, stdev=1449.75 00:33:43.417 clat percentiles (usec): 00:33:43.417 | 1.00th=[23987], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.417 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.417 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:43.417 | 99.00th=[31327], 99.50th=[31327], 99.90th=[35914], 99.95th=[36439], 00:33:43.417 | 99.99th=[36963] 00:33:43.417 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2098.95, stdev=64.03, samples=20 00:33:43.417 iops : min= 512, max= 544, avg=524.70, stdev=15.96, samples=20 00:33:43.417 lat (msec) : 20=0.61%, 50=99.39% 00:33:43.417 cpu : usr=98.53%, sys=1.07%, ctx=10, majf=0, minf=9 00:33:43.417 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.417 filename0: (groupid=0, jobs=1): err= 0: pid=1686545: Mon Dec 9 15:26:43 2024 00:33:43.417 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:33:43.417 slat (usec): min=4, max=103, avg=44.22, stdev=24.36 00:33:43.417 clat (usec): min=19207, max=48869, avg=30150.67, stdev=1240.44 00:33:43.417 lat (usec): min=19223, max=48882, avg=30194.88, stdev=1240.76 00:33:43.417 clat percentiles (usec): 00:33:43.417 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:33:43.417 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.417 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:43.417 | 99.00th=[31065], 99.50th=[31065], 99.90th=[49021], 99.95th=[49021], 00:33:43.417 | 99.99th=[49021] 00:33:43.417 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.417 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.417 lat (msec) : 20=0.31%, 50=99.69% 00:33:43.417 cpu : usr=98.60%, sys=1.00%, ctx=14, majf=0, minf=9 00:33:43.417 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.417 filename0: (groupid=0, jobs=1): err= 0: pid=1686546: Mon Dec 9 15:26:43 2024 00:33:43.417 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:33:43.417 slat (nsec): min=7903, max=88787, avg=33200.95, stdev=18723.15 00:33:43.417 clat (usec): 
min=14887, max=52997, avg=30242.29, stdev=1540.58 00:33:43.417 lat (usec): min=14905, max=53048, avg=30275.49, stdev=1541.86 00:33:43.417 clat percentiles (usec): 00:33:43.417 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.417 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:43.417 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.417 | 99.00th=[31065], 99.50th=[31327], 99.90th=[52691], 99.95th=[52691], 00:33:43.417 | 99.99th=[53216] 00:33:43.417 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.417 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.417 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:33:43.417 cpu : usr=98.69%, sys=0.92%, ctx=14, majf=0, minf=9 00:33:43.417 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.417 filename0: (groupid=0, jobs=1): err= 0: pid=1686547: Mon Dec 9 15:26:43 2024 00:33:43.417 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:33:43.417 slat (nsec): min=9214, max=88228, avg=33797.35, stdev=18780.87 00:33:43.417 clat (usec): min=14885, max=66107, avg=30238.33, stdev=1691.87 00:33:43.417 lat (usec): min=14910, max=66144, avg=30272.13, stdev=1692.90 00:33:43.417 clat percentiles (usec): 00:33:43.417 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.417 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:43.417 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.417 | 99.00th=[31065], 99.50th=[31327], 99.90th=[52691], 99.95th=[53216], 00:33:43.417 | 99.99th=[66323] 00:33:43.417 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.417 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.417 lat (msec) : 20=0.38%, 50=99.31%, 100=0.31% 00:33:43.417 cpu : usr=98.56%, sys=1.04%, ctx=16, majf=0, minf=9 00:33:43.417 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.417 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename0: (groupid=0, jobs=1): err= 0: pid=1686548: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10020msec) 00:33:43.418 slat (usec): min=8, max=106, avg=46.40, stdev=22.46 00:33:43.418 clat (usec): min=17064, max=50426, avg=30101.23, stdev=876.40 00:33:43.418 lat (usec): min=17108, max=50502, avg=30147.63, stdev=879.78 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:43.418 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31065], 99.90th=[35390], 99.95th=[35390], 00:33:43.418 | 99.99th=[50594] 00:33:43.418 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2091.50, 
stdev=61.07, samples=20 00:33:43.418 iops : min= 512, max= 544, avg=522.85, stdev=15.24, samples=20 00:33:43.418 lat (msec) : 20=0.34%, 50=99.64%, 100=0.02% 00:33:43.418 cpu : usr=98.60%, sys=0.97%, ctx=35, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename0: (groupid=0, jobs=1): err= 0: pid=1686549: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10005msec) 00:33:43.418 slat (nsec): min=7489, max=86311, avg=21117.65, stdev=17129.95 00:33:43.418 clat (usec): min=13248, max=31605, avg=30215.76, stdev=1562.92 00:33:43.418 lat (usec): min=13264, max=31626, avg=30236.88, stdev=1563.13 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[25035], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:43.418 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.418 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:43.418 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:33:43.418 | 99.99th=[31589] 00:33:43.418 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2101.89, stdev=77.69, samples=19 00:33:43.418 iops : min= 512, max= 576, avg=525.47, stdev=19.42, samples=19 00:33:43.418 lat (msec) : 20=0.91%, 50=99.09% 00:33:43.418 cpu : usr=98.70%, sys=0.87%, ctx=15, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename1: (groupid=0, jobs=1): err= 0: pid=1686551: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:33:43.418 slat (usec): min=4, max=105, avg=42.33, stdev=25.03 00:33:43.418 clat (usec): min=19264, max=46891, avg=30145.81, stdev=1146.93 00:33:43.418 lat (usec): min=19281, max=46904, avg=30188.14, stdev=1148.06 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:33:43.418 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31065], 99.90th=[46924], 99.95th=[46924], 00:33:43.418 | 99.99th=[46924] 00:33:43.418 bw ( KiB/s): min= 1916, max= 2176, per=4.14%, avg=2088.21, stdev=75.05, samples=19 00:33:43.418 iops : min= 479, max= 544, avg=522.05, stdev=18.76, samples=19 00:33:43.418 lat (msec) : 20=0.31%, 50=99.69% 00:33:43.418 cpu : usr=98.59%, sys=1.00%, ctx=14, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:43.418 filename1: (groupid=0, jobs=1): err= 0: pid=1686552: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=526, BW=2108KiB/s (2158kB/s)(20.6MiB/10021msec) 00:33:43.418 slat (nsec): min=7781, max=88499, avg=34709.23, stdev=19535.27 00:33:43.418 clat (usec): min=9173, max=31427, avg=30018.67, stdev=1855.99 00:33:43.418 lat (usec): min=9199, max=31463, avg=30053.38, stdev=1858.55 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[20055], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.418 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:33:43.418 | 99.99th=[31327] 00:33:43.418 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2105.35, stdev=76.75, samples=20 00:33:43.418 iops : min= 512, max= 574, avg=526.30, stdev=19.09, samples=20 00:33:43.418 lat (msec) : 10=0.30%, 20=0.61%, 50=99.09% 00:33:43.418 cpu : usr=98.44%, sys=1.14%, ctx=15, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename1: (groupid=0, jobs=1): err= 0: pid=1686553: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10004msec) 00:33:43.418 slat (usec): min=4, max=104, avg=43.58, stdev=24.52 00:33:43.418 clat (usec): min=19236, max=49750, avg=30156.33, stdev=1328.97 00:33:43.418 lat (usec): min=19253, max=49762, avg=30199.91, stdev=1329.39 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:33:43.418 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31065], 99.90th=[49546], 99.95th=[49546], 00:33:43.418 | 99.99th=[49546] 00:33:43.418 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.418 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.418 lat (msec) : 20=0.31%, 50=99.69% 00:33:43.418 cpu : usr=98.69%, sys=0.92%, ctx=12, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename1: (groupid=0, jobs=1): err= 0: pid=1686554: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:33:43.418 slat (nsec): min=8094, max=86156, avg=26640.29, stdev=17126.98 00:33:43.418 clat (usec): min=22774, max=45384, avg=30326.63, stdev=973.84 00:33:43.418 lat (usec): min=22793, max=45415, avg=30353.27, stdev=974.29 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:43.418 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 
60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31327], 99.90th=[45351], 99.95th=[45351], 00:33:43.418 | 99.99th=[45351] 00:33:43.418 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.418 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.418 lat (msec) : 50=100.00% 00:33:43.418 cpu : usr=98.63%, sys=0.98%, ctx=13, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename1: (groupid=0, jobs=1): err= 0: pid=1686555: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.4MiB/10001msec) 00:33:43.418 slat (usec): min=4, max=101, avg=43.19, stdev=24.89 00:33:43.418 clat (usec): min=19248, max=46737, avg=30143.48, stdev=1141.55 00:33:43.418 lat (usec): min=19265, max=46750, avg=30186.67, stdev=1142.51 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:33:43.418 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31065], 99.90th=[46924], 99.95th=[46924], 00:33:43.418 | 99.99th=[46924] 00:33:43.418 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.418 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.418 lat (msec) : 20=0.31%, 50=99.69% 00:33:43.418 cpu : usr=98.51%, sys=1.09%, ctx=14, majf=0, minf=9 00:33:43.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.418 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.418 filename1: (groupid=0, jobs=1): err= 0: pid=1686556: Mon Dec 9 15:26:43 2024 00:33:43.418 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:33:43.418 slat (usec): min=4, max=102, avg=44.03, stdev=24.35 00:33:43.418 clat (usec): min=13872, max=31214, avg=30082.02, stdev=889.68 00:33:43.418 lat (usec): min=13886, max=31248, avg=30126.05, stdev=893.87 00:33:43.418 clat percentiles (usec): 00:33:43.418 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:43.418 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.418 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:33:43.418 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:33:43.418 | 99.99th=[31327] 00:33:43.418 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2095.16, stdev=63.44, samples=19 00:33:43.418 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:33:43.418 lat (msec) : 20=0.29%, 50=99.71% 00:33:43.418 cpu : usr=98.62%, sys=0.98%, ctx=13, majf=0, minf=10 00:33:43.418 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename1: (groupid=0, jobs=1): err= 0: pid=1686557: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:33:43.419 slat (usec): min=8, max=102, avg=45.42, stdev=23.85 00:33:43.419 clat (usec): min=13018, max=31270, avg=30036.90, stdev=1377.14 00:33:43.419 lat (usec): min=13034, max=31285, avg=30082.33, stdev=1379.38 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:43.419 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:43.419 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:33:43.419 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:33:43.419 | 99.99th=[31327] 00:33:43.419 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2098.95, stdev=64.03, samples=20 00:33:43.419 iops : min= 512, max= 544, avg=524.70, stdev=15.96, samples=20 00:33:43.419 lat (msec) : 20=0.61%, 50=99.39% 00:33:43.419 cpu : usr=98.50%, sys=1.11%, ctx=16, majf=0, minf=9 00:33:43.419 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename1: (groupid=0, jobs=1): err= 0: pid=1686558: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=569, BW=2280KiB/s (2334kB/s)(22.3MiB/10002msec) 00:33:43.419 slat (usec): min=5, max=102, avg=24.62, stdev=20.94 00:33:43.419 clat (usec): min=13435, max=53532, avg=27877.72, stdev=4948.90 00:33:43.419 lat (usec): min=13444, max=53547, avg=27902.34, stdev=4954.73 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[18744], 5.00th=[18744], 10.00th=[20841], 20.00th=[22414], 00:33:43.419 | 30.00th=[26870], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:33:43.419 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:33:43.419 | 99.00th=[40633], 99.50th=[47449], 99.90th=[53740], 99.95th=[53740], 00:33:43.419 | 99.99th=[53740] 00:33:43.419 bw ( KiB/s): min= 1920, max= 2832, per=4.52%, avg=2280.42, stdev=258.56, samples=19 00:33:43.419 iops : min= 480, max= 708, avg=570.11, stdev=64.64, samples=19 00:33:43.419 lat (msec) : 20=8.12%, 50=91.60%, 100=0.28% 00:33:43.419 cpu : usr=98.58%, sys=1.02%, ctx=16, majf=0, minf=9 00:33:43.419 IO depths : 1=2.8%, 2=5.8%, 4=14.2%, 8=66.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=91.3%, 8=4.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename2: (groupid=0, jobs=1): err= 0: pid=1686559: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=526, BW=2107KiB/s (2157kB/s)(20.6MiB/10006msec) 00:33:43.419 slat (nsec): min=7320, max=97163, avg=12986.34, stdev=9544.96 00:33:43.419 clat (usec): min=7931, max=32516, avg=30269.47, stdev=1803.16 00:33:43.419 lat (usec): min=7939, 
max=32547, avg=30282.45, stdev=1802.90 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[20317], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:33:43.419 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:33:43.419 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:33:43.419 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:33:43.419 | 99.99th=[32637] 00:33:43.419 bw ( KiB/s): min= 2048, max= 2347, per=4.17%, avg=2104.16, stdev=84.26, samples=19 00:33:43.419 iops : min= 512, max= 586, avg=526.00, stdev=20.94, samples=19 00:33:43.419 lat (msec) : 10=0.11%, 20=0.83%, 50=99.05% 00:33:43.419 cpu : usr=98.67%, sys=0.94%, ctx=13, majf=0, minf=9 00:33:43.419 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename2: (groupid=0, jobs=1): err= 0: pid=1686561: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec) 00:33:43.419 slat (usec): min=8, max=102, avg=43.69, stdev=24.82 00:33:43.419 clat (usec): min=13320, max=31257, avg=30086.41, stdev=1364.64 00:33:43.419 lat (usec): min=13335, max=31272, avg=30130.10, stdev=1366.09 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:33:43.419 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.419 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.419 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:33:43.419 | 99.99th=[31327] 00:33:43.419 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2098.95, stdev=64.03, samples=20 00:33:43.419 iops : min= 512, max= 544, avg=524.70, stdev=15.96, samples=20 00:33:43.419 lat (msec) : 20=0.61%, 50=99.39% 00:33:43.419 cpu : usr=98.64%, sys=0.96%, ctx=15, majf=0, minf=9 00:33:43.419 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename2: (groupid=0, jobs=1): err= 0: pid=1686562: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:33:43.419 slat (nsec): min=6368, max=86045, avg=26305.00, stdev=16710.28 00:33:43.419 clat (usec): min=22779, max=44346, avg=30315.49, stdev=929.13 00:33:43.419 lat (usec): min=22788, max=44364, avg=30341.79, stdev=929.64 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:43.419 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.419 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.419 | 99.00th=[31065], 99.50th=[31327], 99.90th=[44303], 99.95th=[44303], 00:33:43.419 | 99.99th=[44303] 00:33:43.419 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2088.58, stdev=74.17, samples=19 00:33:43.419 iops : min= 480, max= 544, avg=522.11, stdev=18.64, 
samples=19 00:33:43.419 lat (msec) : 50=100.00% 00:33:43.419 cpu : usr=98.56%, sys=1.03%, ctx=14, majf=0, minf=9 00:33:43.419 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename2: (groupid=0, jobs=1): err= 0: pid=1686563: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.4MiB/10001msec) 00:33:43.419 slat (nsec): min=7645, max=79501, avg=35892.46, stdev=14767.25 00:33:43.419 clat (usec): min=22004, max=50717, avg=30284.57, stdev=653.93 00:33:43.419 lat (usec): min=22012, max=50744, avg=30320.47, stdev=652.71 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:33:43.419 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.419 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.419 | 99.00th=[31065], 99.50th=[31065], 99.90th=[36439], 99.95th=[36439], 00:33:43.419 | 99.99th=[50594] 00:33:43.419 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2088.42, stdev=61.13, samples=19 00:33:43.419 iops : min= 512, max= 544, avg=522.11, stdev=15.28, samples=19 00:33:43.419 lat (msec) : 50=99.96%, 100=0.04% 00:33:43.419 cpu : usr=98.48%, sys=1.05%, ctx=76, majf=0, minf=9 00:33:43.419 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename2: (groupid=0, jobs=1): err= 0: pid=1686564: Mon Dec 9 15:26:43 2024 00:33:43.419 read: IOPS=523, BW=2092KiB/s (2142kB/s)(20.4MiB/10003msec) 00:33:43.419 slat (nsec): min=6762, max=63482, avg=21699.80, stdev=8430.09 00:33:43.419 clat (usec): min=20347, max=55495, avg=30396.94, stdev=1263.74 00:33:43.419 lat (usec): min=20357, max=55518, avg=30418.64, stdev=1263.65 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:33:43.419 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.419 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:43.419 | 99.00th=[31327], 99.50th=[39060], 99.90th=[45351], 99.95th=[45351], 00:33:43.419 | 99.99th=[55313] 00:33:43.419 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.419 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.419 lat (msec) : 50=99.96%, 100=0.04% 00:33:43.419 cpu : usr=98.36%, sys=1.14%, ctx=50, majf=0, minf=9 00:33:43.419 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:43.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.419 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.419 filename2: (groupid=0, jobs=1): err= 0: pid=1686565: Mon Dec 9 15:26:43 
2024 00:33:43.419 read: IOPS=525, BW=2101KiB/s (2152kB/s)(20.6MiB/10021msec) 00:33:43.419 slat (usec): min=7, max=127, avg=25.34, stdev=11.86 00:33:43.419 clat (usec): min=9583, max=31428, avg=30245.11, stdev=1345.73 00:33:43.419 lat (usec): min=9592, max=31453, avg=30270.45, stdev=1343.09 00:33:43.419 clat percentiles (usec): 00:33:43.419 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:33:43.419 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:33:43.419 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:33:43.419 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:33:43.419 | 99.99th=[31327] 00:33:43.419 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2099.20, stdev=64.34, samples=20 00:33:43.419 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20 00:33:43.419 lat (msec) : 10=0.27%, 20=0.34%, 50=99.39% 00:33:43.420 cpu : usr=98.10%, sys=1.21%, ctx=100, majf=0, minf=11 00:33:43.420 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.420 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.420 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.420 filename2: (groupid=0, jobs=1): err= 0: pid=1686566: Mon Dec 9 15:26:43 2024 00:33:43.420 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:33:43.420 slat (nsec): min=7560, max=89876, avg=33262.28, stdev=18501.48 00:33:43.420 clat (usec): min=14844, max=53505, avg=30243.42, stdev=1571.28 00:33:43.420 lat (usec): min=14860, max=53530, avg=30276.68, stdev=1571.56 00:33:43.420 clat percentiles (usec): 00:33:43.420 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.420 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:33:43.420 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:33:43.420 | 99.00th=[31065], 99.50th=[31327], 99.90th=[53216], 99.95th=[53216], 00:33:43.420 | 99.99th=[53740] 00:33:43.420 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2088.42, stdev=74.55, samples=19 00:33:43.420 iops : min= 480, max= 544, avg=522.11, stdev=18.64, samples=19 00:33:43.420 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31% 00:33:43.420 cpu : usr=98.82%, sys=0.79%, ctx=17, majf=0, minf=9 00:33:43.420 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:43.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.420 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.420 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.420 filename2: (groupid=0, jobs=1): err= 0: pid=1686567: Mon Dec 9 15:26:43 2024 00:33:43.420 read: IOPS=523, BW=2095KiB/s (2146kB/s)(20.5MiB/10014msec) 00:33:43.420 slat (nsec): min=4866, max=97338, avg=37339.82, stdev=19948.23 00:33:43.420 clat (usec): min=14774, max=54473, avg=30169.99, stdev=1084.51 00:33:43.420 lat (usec): min=14789, max=54504, avg=30207.33, stdev=1085.33 00:33:43.420 clat percentiles (usec): 00:33:43.420 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:33:43.420 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:33:43.420 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 
00:33:43.420 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35914], 99.95th=[38011], 00:33:43.420 | 99.99th=[54264] 00:33:43.420 bw ( KiB/s): min= 2032, max= 2176, per=4.15%, avg=2092.00, stdev=63.34, samples=20 00:33:43.420 iops : min= 508, max= 544, avg=523.00, stdev=15.83, samples=20 00:33:43.420 lat (msec) : 20=0.30%, 50=99.66%, 100=0.04% 00:33:43.420 cpu : usr=98.60%, sys=0.99%, ctx=14, majf=0, minf=9 00:33:43.420 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:43.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.420 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.420 issued rwts: total=5246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:43.420 00:33:43.420 Run status group 0 (all jobs): 00:33:43.420 READ: bw=49.3MiB/s (51.7MB/s), 2092KiB/s-2280KiB/s (2142kB/s-2334kB/s), io=494MiB (518MB), run=10001-10021msec 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.420 15:26:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 bdev_null0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 [2024-12-09 15:26:43.966572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 bdev_null1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.420 15:26:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.420 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.421 15:26:44 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.421 { 00:33:43.421 "params": { 00:33:43.421 "name": "Nvme$subsystem", 00:33:43.421 "trtype": "$TEST_TRANSPORT", 00:33:43.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.421 "adrfam": "ipv4", 00:33:43.421 "trsvcid": "$NVMF_PORT", 00:33:43.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.421 "hdgst": ${hdgst:-false}, 00:33:43.421 "ddgst": ${ddgst:-false} 00:33:43.421 }, 00:33:43.421 "method": "bdev_nvme_attach_controller" 00:33:43.421 } 00:33:43.421 EOF 00:33:43.421 )") 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.421 { 00:33:43.421 "params": { 00:33:43.421 "name": "Nvme$subsystem", 00:33:43.421 "trtype": "$TEST_TRANSPORT", 00:33:43.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.421 "adrfam": "ipv4", 00:33:43.421 "trsvcid": "$NVMF_PORT", 00:33:43.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.421 "hdgst": ${hdgst:-false}, 00:33:43.421 "ddgst": ${ddgst:-false} 00:33:43.421 }, 00:33:43.421 "method": "bdev_nvme_attach_controller" 00:33:43.421 } 00:33:43.421 EOF 00:33:43.421 )") 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.421 15:26:44 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.421 "params": { 00:33:43.421 "name": "Nvme0", 00:33:43.421 "trtype": "tcp", 00:33:43.421 "traddr": "10.0.0.2", 00:33:43.421 "adrfam": "ipv4", 00:33:43.421 "trsvcid": "4420", 00:33:43.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.421 "hdgst": false, 00:33:43.421 "ddgst": false 00:33:43.421 }, 00:33:43.421 "method": "bdev_nvme_attach_controller" 00:33:43.421 },{ 00:33:43.421 "params": { 00:33:43.421 "name": "Nvme1", 00:33:43.421 "trtype": "tcp", 00:33:43.421 "traddr": "10.0.0.2", 00:33:43.421 "adrfam": "ipv4", 00:33:43.421 "trsvcid": "4420", 00:33:43.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:43.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:43.421 "hdgst": false, 00:33:43.421 "ddgst": false 00:33:43.421 }, 00:33:43.421 "method": "bdev_nvme_attach_controller" 00:33:43.421 }' 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:43.421 15:26:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.421 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:43.421 ... 00:33:43.421 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:43.421 ... 
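Note: the four fio threads started below come from the job file that gen_fio_conf writes to /dev/fd/61 for this run (two jobs, filename0 and filename1, each with numjobs=2, matching the dif.sh@115 parameters traced above). A minimal hand-written equivalent is sketched here; the exact options the helper emits may differ, and the bdev names Nvme0n1/Nvme1n1 are an assumption (the usual namespace bdevs behind the two attached controllers):

    cat > /tmp/dif_rand_params.fio <<'EOF'
    ; hedged hand-written equivalent of the generated job file for this run
    ; bs=8k,16k,128k gives the read,write,trim sizes reported as "(R) 8192B ... (T) 128KiB"
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    ; filename= names a bdev, not a file; Nvme0n1/Nvme1n1 are the assumed
    ; namespace bdevs created by the two bdev_nvme_attach_controller entries
    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    # invoked the same way the trace above does, paths relative to the spdk checkout
    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/nvme_attach.json /tmp/dif_rand_params.fio
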
00:33:43.421 fio-3.35 00:33:43.421 Starting 4 threads 00:33:48.691 00:33:48.691 filename0: (groupid=0, jobs=1): err= 0: pid=1688506: Mon Dec 9 15:26:50 2024 00:33:48.691 read: IOPS=2917, BW=22.8MiB/s (23.9MB/s)(114MiB/5002msec) 00:33:48.691 slat (nsec): min=5977, max=57778, avg=10745.80, stdev=5376.63 00:33:48.691 clat (usec): min=514, max=5551, avg=2707.50, stdev=426.40 00:33:48.691 lat (usec): min=529, max=5565, avg=2718.24, stdev=426.59 00:33:48.691 clat percentiles (usec): 00:33:48.691 | 1.00th=[ 1631], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2376], 00:33:48.691 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2802], 00:33:48.691 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3392], 00:33:48.691 | 99.00th=[ 3818], 99.50th=[ 4113], 99.90th=[ 4817], 99.95th=[ 4883], 00:33:48.691 | 99.99th=[ 5342] 00:33:48.691 bw ( KiB/s): min=21760, max=25440, per=27.70%, avg=23347.56, stdev=1162.97, samples=9 00:33:48.692 iops : min= 2720, max= 3180, avg=2918.44, stdev=145.37, samples=9 00:33:48.692 lat (usec) : 750=0.02%, 1000=0.16% 00:33:48.692 lat (msec) : 2=3.00%, 4=96.10%, 10=0.72% 00:33:48.692 cpu : usr=96.60%, sys=3.08%, ctx=7, majf=0, minf=9 00:33:48.692 IO depths : 1=0.4%, 2=12.6%, 4=58.9%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 issued rwts: total=14593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.692 filename0: (groupid=0, jobs=1): err= 0: pid=1688507: Mon Dec 9 15:26:50 2024 00:33:48.692 read: IOPS=2661, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:33:48.692 slat (nsec): min=5993, max=62401, avg=11505.16, stdev=6218.90 00:33:48.692 clat (usec): min=753, max=6104, avg=2971.48, stdev=487.89 00:33:48.692 lat (usec): min=759, max=6117, avg=2982.99, stdev=487.97 00:33:48.692 clat percentiles (usec): 00:33:48.692 | 1.00th=[ 1991], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606], 00:33:48.692 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 3032], 00:33:48.692 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3556], 95.00th=[ 3851], 00:33:48.692 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5407], 99.95th=[ 5604], 00:33:48.692 | 99.99th=[ 5997] 00:33:48.692 bw ( KiB/s): min=19792, max=24592, per=25.09%, avg=21149.44, stdev=1414.93, samples=9 00:33:48.692 iops : min= 2474, max= 3074, avg=2643.67, stdev=176.87, samples=9 00:33:48.692 lat (usec) : 1000=0.05% 00:33:48.692 lat (msec) : 2=1.01%, 4=95.42%, 10=3.52% 00:33:48.692 cpu : usr=96.98%, sys=2.66%, ctx=18, majf=0, minf=9 00:33:48.692 IO depths : 1=0.3%, 2=6.2%, 4=64.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 issued rwts: total=13309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.692 filename1: (groupid=0, jobs=1): err= 0: pid=1688509: Mon Dec 9 15:26:50 2024 00:33:48.692 read: IOPS=2483, BW=19.4MiB/s (20.3MB/s)(97.8MiB/5042msec) 00:33:48.692 slat (nsec): min=5979, max=62563, avg=11040.89, stdev=6144.03 00:33:48.692 clat (usec): min=602, max=42209, avg=3178.99, stdev=940.81 00:33:48.692 lat (usec): min=610, max=42216, avg=3190.03, stdev=940.45 00:33:48.692 clat percentiles (usec): 00:33:48.692 | 1.00th=[ 2114], 
5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2835], 00:33:48.692 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3163], 00:33:48.692 | 70.00th=[ 3294], 80.00th=[ 3490], 90.00th=[ 3916], 95.00th=[ 4293], 00:33:48.692 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 5735], 99.95th=[ 5866], 00:33:48.692 | 99.99th=[42206] 00:33:48.692 bw ( KiB/s): min=18432, max=21392, per=23.76%, avg=20030.40, stdev=1033.77, samples=10 00:33:48.692 iops : min= 2304, max= 2674, avg=2503.80, stdev=129.22, samples=10 00:33:48.692 lat (usec) : 750=0.01%, 1000=0.05% 00:33:48.692 lat (msec) : 2=0.53%, 4=90.49%, 10=8.88%, 50=0.04% 00:33:48.692 cpu : usr=97.10%, sys=2.56%, ctx=16, majf=0, minf=9 00:33:48.692 IO depths : 1=0.3%, 2=3.1%, 4=68.7%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 issued rwts: total=12524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.692 filename1: (groupid=0, jobs=1): err= 0: pid=1688510: Mon Dec 9 15:26:50 2024 00:33:48.692 read: IOPS=2538, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5002msec) 00:33:48.692 slat (nsec): min=5910, max=60997, avg=13848.06, stdev=9606.85 00:33:48.692 clat (usec): min=650, max=5814, avg=3107.44, stdev=557.39 00:33:48.692 lat (usec): min=663, max=5820, avg=3121.29, stdev=556.97 00:33:48.692 clat percentiles (usec): 00:33:48.692 | 1.00th=[ 1942], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:33:48.692 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3097], 00:33:48.692 | 70.00th=[ 3228], 80.00th=[ 3458], 90.00th=[ 3851], 95.00th=[ 4293], 00:33:48.692 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5604], 00:33:48.692 | 99.99th=[ 5800] 00:33:48.692 bw ( KiB/s): min=18256, max=21680, per=24.02%, avg=20245.33, stdev=1176.63, samples=9 00:33:48.692 iops : min= 2282, max= 2710, avg=2530.67, stdev=147.08, samples=9 00:33:48.692 lat (usec) : 750=0.02%, 1000=0.01% 00:33:48.692 lat (msec) : 2=1.26%, 4=90.61%, 10=8.10% 00:33:48.692 cpu : usr=95.50%, sys=3.36%, ctx=115, majf=0, minf=9 00:33:48.692 IO depths : 1=0.4%, 2=6.2%, 4=65.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.692 issued rwts: total=12700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:48.692 00:33:48.692 Run status group 0 (all jobs): 00:33:48.692 READ: bw=82.3MiB/s (86.3MB/s), 19.4MiB/s-22.8MiB/s (20.3MB/s-23.9MB/s), io=415MiB (435MB), run=5001-5042msec 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.692 00:33:48.692 real 0m24.557s 00:33:48.692 user 4m52.953s 00:33:48.692 sys 0m4.631s 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.692 15:26:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:48.692 ************************************ 00:33:48.692 END TEST fio_dif_rand_params 00:33:48.692 ************************************ 00:33:48.952 15:26:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:48.952 15:26:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:48.952 15:26:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.952 15:26:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:48.952 ************************************ 00:33:48.952 START TEST fio_dif_digest 00:33:48.952 ************************************ 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.952 bdev_null0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.952 [2024-12-09 15:26:50.559528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:48.952 { 00:33:48.952 "params": { 00:33:48.952 "name": 
"Nvme$subsystem", 00:33:48.952 "trtype": "$TEST_TRANSPORT", 00:33:48.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.952 "adrfam": "ipv4", 00:33:48.952 "trsvcid": "$NVMF_PORT", 00:33:48.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.952 "hdgst": ${hdgst:-false}, 00:33:48.952 "ddgst": ${ddgst:-false} 00:33:48.952 }, 00:33:48.952 "method": "bdev_nvme_attach_controller" 00:33:48.952 } 00:33:48.952 EOF 00:33:48.952 )") 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:48.952 "params": { 00:33:48.952 "name": "Nvme0", 00:33:48.952 "trtype": "tcp", 00:33:48.952 "traddr": "10.0.0.2", 00:33:48.952 "adrfam": "ipv4", 00:33:48.952 "trsvcid": "4420", 00:33:48.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:48.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:48.952 "hdgst": true, 00:33:48.952 "ddgst": true 00:33:48.952 }, 00:33:48.952 "method": "bdev_nvme_attach_controller" 00:33:48.952 }' 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:48.952 15:26:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.211 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:49.211 ... 
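Before the three threads start below, it is worth noting what this digest job runs against: the DIF type 3 null bdev and the subsystem created by the rpc_cmd calls traced above. Reproduced by hand with scripts/rpc.py it would look roughly like the following sketch (it assumes rpc_cmd forwards its arguments unchanged to rpc.py and that nvmf_tgt with its TCP transport is already up):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 64 MiB null bdev, 512-byte blocks with 16-byte metadata, protection information type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
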
00:33:49.211 fio-3.35 00:33:49.211 Starting 3 threads 00:34:01.420 00:34:01.420 filename0: (groupid=0, jobs=1): err= 0: pid=1689696: Mon Dec 9 15:27:01 2024 00:34:01.420 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(332MiB/10044msec) 00:34:01.420 slat (nsec): min=6657, max=70957, avg=27345.41, stdev=8901.79 00:34:01.420 clat (usec): min=5824, max=49300, avg=11293.96, stdev=1359.25 00:34:01.420 lat (usec): min=5853, max=49335, avg=11321.31, stdev=1359.38 00:34:01.420 clat percentiles (usec): 00:34:01.420 | 1.00th=[ 7570], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:01.420 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:34:01.420 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:01.420 | 99.00th=[13042], 99.50th=[13435], 99.90th=[15401], 99.95th=[49546], 00:34:01.420 | 99.99th=[49546] 00:34:01.420 bw ( KiB/s): min=33280, max=36096, per=32.33%, avg=33984.00, stdev=562.56, samples=20 00:34:01.420 iops : min= 260, max= 282, avg=265.50, stdev= 4.39, samples=20 00:34:01.420 lat (msec) : 10=4.48%, 20=95.45%, 50=0.08% 00:34:01.420 cpu : usr=93.27%, sys=4.50%, ctx=558, majf=0, minf=126 00:34:01.420 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.420 issued rwts: total=2657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.420 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:01.420 filename0: (groupid=0, jobs=1): err= 0: pid=1689697: Mon Dec 9 15:27:01 2024 00:34:01.420 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(340MiB/10046msec) 00:34:01.420 slat (nsec): min=6365, max=45537, avg=17435.85, stdev=6739.32 00:34:01.421 clat (usec): min=6436, max=47293, avg=11031.55, stdev=1251.09 00:34:01.421 lat (usec): min=6458, max=47318, avg=11048.98, stdev=1250.89 00:34:01.421 clat percentiles (usec): 00:34:01.421 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:01.421 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:34:01.421 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:34:01.421 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14091], 99.95th=[45876], 00:34:01.421 | 99.99th=[47449] 00:34:01.421 bw ( KiB/s): min=34304, max=36352, per=33.14%, avg=34828.80, stdev=480.55, samples=20 00:34:01.421 iops : min= 268, max= 284, avg=272.10, stdev= 3.75, samples=20 00:34:01.421 lat (msec) : 10=8.41%, 20=91.52%, 50=0.07% 00:34:01.421 cpu : usr=96.24%, sys=3.43%, ctx=16, majf=0, minf=41 00:34:01.421 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.421 issued rwts: total=2723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.421 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:01.421 filename0: (groupid=0, jobs=1): err= 0: pid=1689698: Mon Dec 9 15:27:01 2024 00:34:01.421 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(359MiB/10047msec) 00:34:01.421 slat (nsec): min=6279, max=45323, avg=17180.65, stdev=6685.56 00:34:01.421 clat (usec): min=5147, max=50923, avg=10467.68, stdev=2208.98 00:34:01.421 lat (usec): min=5156, max=50943, avg=10484.86, stdev=2208.87 00:34:01.421 clat percentiles (usec): 00:34:01.421 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:34:01.421 | 
30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:34:01.421 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:34:01.421 | 99.00th=[11994], 99.50th=[12387], 99.90th=[51119], 99.95th=[51119], 00:34:01.421 | 99.99th=[51119] 00:34:01.421 bw ( KiB/s): min=31232, max=37632, per=34.93%, avg=36710.40, stdev=1329.96, samples=20 00:34:01.421 iops : min= 244, max= 294, avg=286.80, stdev=10.39, samples=20 00:34:01.421 lat (msec) : 10=28.47%, 20=71.25%, 50=0.14%, 100=0.14% 00:34:01.421 cpu : usr=96.53%, sys=3.15%, ctx=89, majf=0, minf=76 00:34:01.421 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:01.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:01.421 issued rwts: total=2870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:01.421 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:01.421 00:34:01.421 Run status group 0 (all jobs): 00:34:01.421 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-35.7MiB/s (34.7MB/s-37.4MB/s), io=1031MiB (1081MB), run=10044-10047msec 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.421 00:34:01.421 real 0m11.313s 00:34:01.421 user 0m35.771s 00:34:01.421 sys 0m1.481s 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.421 15:27:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:01.421 ************************************ 00:34:01.421 END TEST fio_dif_digest 00:34:01.421 ************************************ 00:34:01.421 15:27:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:01.421 15:27:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.421 rmmod nvme_tcp 00:34:01.421 rmmod nvme_fabrics 00:34:01.421 rmmod nvme_keyring 00:34:01.421 15:27:01 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1681201 ']' 00:34:01.421 15:27:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1681201 00:34:01.421 15:27:01 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1681201 ']' 00:34:01.421 15:27:01 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1681201 00:34:01.421 15:27:01 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:01.421 15:27:01 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.421 15:27:01 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1681201 00:34:01.421 15:27:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:01.421 15:27:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:01.421 15:27:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1681201' 00:34:01.421 killing process with pid 1681201 00:34:01.421 15:27:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1681201 00:34:01.421 15:27:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1681201 00:34:01.421 15:27:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:01.421 15:27:02 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:03.327 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:03.327 Waiting for block devices as requested 00:34:03.327 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:03.584 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:03.584 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:03.584 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:03.842 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:03.842 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:03.842 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:04.101 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:04.101 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:04.101 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:04.101 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:04.359 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:04.359 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:04.359 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:04.618 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:04.618 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:04.618 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:04.618 15:27:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.618 15:27:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.618 15:27:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:04.618 15:27:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:04.618 15:27:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.618 15:27:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.877 15:27:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.877 15:27:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.877 15:27:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.877 15:27:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:04.877 15:27:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.782 
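The nvmftestfini sequence above, together with the address flush that follows, amounts to the manual cleanup sketched below. This is a rough approximation only: the pid (1681201 in this run) and the cvl_0_* interface/namespace names are specific to this rig and are assumptions here.

    # stop the nvmf_tgt started for the dif tests and wait for it to exit
    kill 1681201
    while kill -0 1681201 2>/dev/null; do sleep 1; done
    # unload the kernel initiator modules pulled in by the nvme-cli side of the tests
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring
    # drop only the SPDK-inserted iptables rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # assumed namespace/interface names for this phy rig
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
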
15:27:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.782 00:34:06.782 real 1m14.792s 00:34:06.782 user 7m12.178s 00:34:06.782 sys 0m20.099s 00:34:06.782 15:27:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.782 15:27:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:06.782 ************************************ 00:34:06.782 END TEST nvmf_dif 00:34:06.782 ************************************ 00:34:06.782 15:27:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:06.782 15:27:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:06.782 15:27:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.782 15:27:08 -- common/autotest_common.sh@10 -- # set +x 00:34:06.782 ************************************ 00:34:06.782 START TEST nvmf_abort_qd_sizes 00:34:06.782 ************************************ 00:34:06.782 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:07.041 * Looking for test storage... 00:34:07.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.041 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:07.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.042 --rc genhtml_branch_coverage=1 00:34:07.042 --rc genhtml_function_coverage=1 00:34:07.042 --rc genhtml_legend=1 00:34:07.042 --rc geninfo_all_blocks=1 00:34:07.042 --rc geninfo_unexecuted_blocks=1 00:34:07.042 00:34:07.042 ' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:07.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.042 --rc genhtml_branch_coverage=1 00:34:07.042 --rc genhtml_function_coverage=1 00:34:07.042 --rc genhtml_legend=1 00:34:07.042 --rc geninfo_all_blocks=1 00:34:07.042 --rc geninfo_unexecuted_blocks=1 00:34:07.042 00:34:07.042 ' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:07.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.042 --rc genhtml_branch_coverage=1 00:34:07.042 --rc genhtml_function_coverage=1 00:34:07.042 --rc genhtml_legend=1 00:34:07.042 --rc geninfo_all_blocks=1 00:34:07.042 --rc geninfo_unexecuted_blocks=1 00:34:07.042 00:34:07.042 ' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:07.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.042 --rc genhtml_branch_coverage=1 00:34:07.042 --rc genhtml_function_coverage=1 00:34:07.042 --rc genhtml_legend=1 00:34:07.042 --rc geninfo_all_blocks=1 00:34:07.042 --rc geninfo_unexecuted_blocks=1 00:34:07.042 00:34:07.042 ' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:07.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.042 15:27:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:13.615 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:13.615 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:13.615 Found net devices under 0000:af:00.0: cvl_0_0 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:13.615 Found net devices under 0000:af:00.1: cvl_0_1 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:13.615 15:27:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:13.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:13.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:34:13.615 00:34:13.615 --- 10.0.0.2 ping statistics --- 00:34:13.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.615 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:34:13.615 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:13.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:13.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:34:13.616 00:34:13.616 --- 10.0.0.1 ping statistics --- 00:34:13.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.616 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:34:13.616 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.616 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:13.616 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:13.616 15:27:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:15.520 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:15.779 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:15.779 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:15.779 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:15.779 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:15.779 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:16.038 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:16.976 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1698200 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1698200 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1698200 ']' 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.976 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:16.976 [2024-12-09 15:27:18.737263] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:34:16.976 [2024-12-09 15:27:18.737310] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.235 [2024-12-09 15:27:18.814710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.235 [2024-12-09 15:27:18.855452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.235 [2024-12-09 15:27:18.855491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.236 [2024-12-09 15:27:18.855498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.236 [2024-12-09 15:27:18.855504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.236 [2024-12-09 15:27:18.855508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.236 [2024-12-09 15:27:18.856916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.236 [2024-12-09 15:27:18.857023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.236 [2024-12-09 15:27:18.857132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.236 [2024-12-09 15:27:18.857133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:17.236 15:27:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- 
scripts/common.sh@323 -- # uname -s 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.236 15:27:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:17.495 ************************************ 00:34:17.495 START TEST spdk_target_abort 00:34:17.495 ************************************ 00:34:17.495 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:17.495 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:17.495 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:17.495 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.495 15:27:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.782 spdk_targetn1 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.782 [2024-12-09 15:27:21.880025] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
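Note: condensed, the spdk_target_abort flow traced here and in the entries that follow is a short RPC sequence against the nvmf_tgt started above inside cvl_0_0_ns_spdk: attach the local NVMe controller at 0000:5e:00.0 as bdev spdk_target (its namespace shows up as spdk_targetn1), create the TCP transport, expose the namespace under nqn.2016-06.io.spdk:testnqn on 10.0.0.2:4420, then drive the abort example once per queue depth 4, 24 and 64. A minimal stand-alone sketch, assuming the traced rpc_cmd wrapper maps to scripts/rpc.py on the default /var/tmp/spdk.sock and that paths are relative to the SPDK checkout (both are assumptions, not shown verbatim in the trace):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"        # assumed equivalent of the rpc_cmd wrapper
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do                              # one abort run per queue depth
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

The per-queue-depth summaries that follow report the I/Os completed plus the abort commands submitted, successful and unsuccessful for each run.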
00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:20.782 [2024-12-09 15:27:21.924307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:20.782 15:27:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:20.782 15:27:21 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:24.069 Initializing NVMe Controllers 00:34:24.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:24.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:24.069 Initialization complete. Launching workers. 00:34:24.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16997, failed: 0 00:34:24.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 15746 00:34:24.069 success 759, unsuccessful 492, failed 0 00:34:24.069 15:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:24.069 15:27:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:27.364 Initializing NVMe Controllers 00:34:27.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:27.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:27.364 Initialization complete. Launching workers. 00:34:27.364 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8745, failed: 0 00:34:27.364 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7494 00:34:27.364 success 320, unsuccessful 931, failed 0 00:34:27.364 15:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:27.364 15:27:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:30.653 Initializing NVMe Controllers 00:34:30.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:30.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:30.653 Initialization complete. Launching workers. 
00:34:30.653 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38888, failed: 0 00:34:30.653 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2924, failed to submit 35964 00:34:30.653 success 585, unsuccessful 2339, failed 0 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.653 15:27:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1698200 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1698200 ']' 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1698200 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698200 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1698200' 00:34:31.590 killing process with pid 1698200 00:34:31.590 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1698200 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1698200 00:34:31.591 00:34:31.591 real 0m14.208s 00:34:31.591 user 0m54.243s 00:34:31.591 sys 0m2.566s 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:31.591 ************************************ 00:34:31.591 END TEST spdk_target_abort 00:34:31.591 ************************************ 00:34:31.591 15:27:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:31.591 15:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:31.591 15:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.591 15:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:31.591 ************************************ 00:34:31.591 START TEST kernel_target_abort 00:34:31.591 
************************************ 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:31.591 15:27:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:34.209 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:34.468 Waiting for block devices as requested 00:34:34.468 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:34.727 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:34.727 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:34.727 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:34.986 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:34.986 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:34.986 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:35.245 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:35.245 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:35.245 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:35.505 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:35.505 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:35.505 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:35.505 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:35.764 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:35.764 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:35.764 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:36.023 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:36.023 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:36.024 No valid GPT data, bailing 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:34:36.024 15:27:37 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:34:36.024 No valid GPT data, bailing 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # continue 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln 
-s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:36.024 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:36.283 00:34:36.283 Discovery Log Number of Records 2, Generation counter 2 00:34:36.283 =====Discovery Log Entry 0====== 00:34:36.283 trtype: tcp 00:34:36.283 adrfam: ipv4 00:34:36.283 subtype: current discovery subsystem 00:34:36.283 treq: not specified, sq flow control disable supported 00:34:36.283 portid: 1 00:34:36.283 trsvcid: 4420 00:34:36.283 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:36.283 traddr: 10.0.0.1 00:34:36.283 eflags: none 00:34:36.283 sectype: none 00:34:36.283 =====Discovery Log Entry 1====== 00:34:36.283 trtype: tcp 00:34:36.283 adrfam: ipv4 00:34:36.283 subtype: nvme subsystem 00:34:36.283 treq: not specified, sq flow control disable supported 00:34:36.283 portid: 1 00:34:36.283 trsvcid: 4420 00:34:36.283 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:36.283 traddr: 10.0.0.1 00:34:36.283 eflags: none 00:34:36.283 sectype: none 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.283 15:27:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.571 Initializing NVMe Controllers 00:34:39.571 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.571 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.571 Initialization complete. Launching workers. 00:34:39.571 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84560, failed: 0 00:34:39.571 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 84560, failed to submit 0 00:34:39.571 success 0, unsuccessful 84560, failed 0 00:34:39.571 15:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:39.571 15:27:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:42.860 Initializing NVMe Controllers 00:34:42.860 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:42.860 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:42.860 Initialization complete. Launching workers. 00:34:42.860 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137236, failed: 0 00:34:42.860 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31946, failed to submit 105290 00:34:42.860 success 0, unsuccessful 31946, failed 0 00:34:42.860 15:27:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:42.860 15:27:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:46.148 Initializing NVMe Controllers 00:34:46.148 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:46.148 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:46.148 Initialization complete. Launching workers. 
00:34:46.148 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130928, failed: 0 00:34:46.148 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32746, failed to submit 98182 00:34:46.148 success 0, unsuccessful 32746, failed 0 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:46.148 15:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:48.053 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:48.621 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:48.621 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:49.559 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:49.559 00:34:49.559 real 0m17.906s 00:34:49.559 user 0m8.988s 00:34:49.559 sys 0m5.283s 00:34:49.559 15:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.559 15:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.559 ************************************ 00:34:49.559 END TEST kernel_target_abort 00:34:49.559 ************************************ 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- 
nvmf/common.sh@121 -- # sync 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.559 rmmod nvme_tcp 00:34:49.559 rmmod nvme_fabrics 00:34:49.559 rmmod nvme_keyring 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1698200 ']' 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1698200 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1698200 ']' 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1698200 00:34:49.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1698200) - No such process 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1698200 is not found' 00:34:49.559 Process with pid 1698200 is not found 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:49.559 15:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:52.095 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:52.663 Waiting for block devices as requested 00:34:52.663 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:52.663 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:52.663 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:52.922 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:52.922 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:52.922 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.180 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.180 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.180 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.439 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:53.439 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.439 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.439 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.698 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.698 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.956 15:27:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.860 15:27:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.860 00:34:55.860 real 0m49.071s 00:34:55.860 user 1m7.857s 00:34:55.860 sys 0m16.641s 00:34:55.860 15:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.860 15:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.860 ************************************ 00:34:55.860 END TEST nvmf_abort_qd_sizes 00:34:55.860 ************************************ 00:34:56.120 15:27:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:56.120 15:27:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:56.120 15:27:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:56.120 15:27:57 -- common/autotest_common.sh@10 -- # set +x 00:34:56.120 ************************************ 00:34:56.120 START TEST keyring_file 00:34:56.120 ************************************ 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:56.120 * Looking for test storage... 00:34:56.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:56.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.120 --rc genhtml_branch_coverage=1 00:34:56.120 --rc genhtml_function_coverage=1 00:34:56.120 --rc genhtml_legend=1 00:34:56.120 --rc geninfo_all_blocks=1 00:34:56.120 --rc geninfo_unexecuted_blocks=1 00:34:56.120 00:34:56.120 ' 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:56.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.120 --rc genhtml_branch_coverage=1 00:34:56.120 --rc genhtml_function_coverage=1 00:34:56.120 --rc genhtml_legend=1 00:34:56.120 --rc geninfo_all_blocks=1 00:34:56.120 --rc geninfo_unexecuted_blocks=1 00:34:56.120 00:34:56.120 ' 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:56.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.120 --rc genhtml_branch_coverage=1 00:34:56.120 --rc genhtml_function_coverage=1 00:34:56.120 --rc genhtml_legend=1 00:34:56.120 --rc geninfo_all_blocks=1 00:34:56.120 --rc geninfo_unexecuted_blocks=1 00:34:56.120 00:34:56.120 ' 00:34:56.120 15:27:57 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:56.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.120 --rc genhtml_branch_coverage=1 00:34:56.120 --rc genhtml_function_coverage=1 00:34:56.120 --rc genhtml_legend=1 00:34:56.120 --rc geninfo_all_blocks=1 00:34:56.120 --rc geninfo_unexecuted_blocks=1 00:34:56.120 00:34:56.120 ' 00:34:56.120 15:27:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:56.120 15:27:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.120 
15:27:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.120 15:27:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.120 15:27:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.120 15:27:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.120 15:27:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.120 15:27:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:56.120 15:27:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@51 -- # : 0 
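[Editor's sketch, not part of the captured log] The next stretch of the trace prepares the two file-based TLS PSKs the test uses (key0 = 00112233445566778899aabbccddeeff, key1 = 112233445566778899aabbccddeeff00) via prep_key. A minimal equivalent of one of those steps, assuming the helpers sourced from keyring/common.sh and nvmf/common.sh are available in the shell; the concrete /tmp path comes from mktemp, exactly as in the trace:

  # Build one interchange-format PSK file the way the traced prep_key call does.
  key0path=$(mktemp)                                           # trace got /tmp/tmp.AFyT9GwWhn
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"                                       # looser modes (e.g. 0660) are rejected later in the test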
00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:56.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:56.120 15:27:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:56.121 15:27:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:56.121 15:27:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:56.121 15:27:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:56.121 15:27:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:56.121 15:27:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:56.121 15:27:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AFyT9GwWhn 00:34:56.121 15:27:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:56.121 15:27:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AFyT9GwWhn 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AFyT9GwWhn 00:34:56.380 15:27:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AFyT9GwWhn 00:34:56.380 15:27:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.Tll5ECG18z 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:56.380 15:27:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:56.380 15:27:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:56.380 15:27:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:56.380 15:27:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:56.380 15:27:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:56.380 15:27:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Tll5ECG18z 00:34:56.380 15:27:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Tll5ECG18z 00:34:56.380 15:27:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Tll5ECG18z 00:34:56.380 15:27:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1706972 00:34:56.380 15:27:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1706972 00:34:56.380 15:27:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:56.380 15:27:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1706972 ']' 00:34:56.380 15:27:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.380 15:27:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.380 15:27:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.380 15:27:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.380 15:27:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.380 [2024-12-09 15:27:58.045401] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
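[Editor's sketch, not part of the captured log] Once spdk_tgt (pid 1706972) is up and listening, the trace below starts a bdevperf instance on /var/tmp/bperf.sock and drives it over JSON-RPC: the two PSK files are registered under the names key0/key1 and their reference counts are checked. A rough sketch of those calls; the $rpc shorthand is mine (the trace spells out the full rpc.py path each time), and the two-stage jq filter from the trace is collapsed into one expression:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn       # register the PSK files by name
  $rpc keyring_file_add_key key1 /tmp/tmp.Tll5ECG18z
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 1 before attach, 2 while a controller uses it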
00:34:56.380 [2024-12-09 15:27:58.045452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706972 ] 00:34:56.380 [2024-12-09 15:27:58.119774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.380 [2024-12-09 15:27:58.160117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:56.639 15:27:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.639 [2024-12-09 15:27:58.370642] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.639 null0 00:34:56.639 [2024-12-09 15:27:58.402695] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:56.639 [2024-12-09 15:27:58.402967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.639 15:27:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.639 15:27:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.639 [2024-12-09 15:27:58.430759] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:56.898 request: 00:34:56.898 { 00:34:56.898 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.898 "secure_channel": false, 00:34:56.898 "listen_address": { 00:34:56.898 "trtype": "tcp", 00:34:56.898 "traddr": "127.0.0.1", 00:34:56.898 "trsvcid": "4420" 00:34:56.898 }, 00:34:56.898 "method": "nvmf_subsystem_add_listener", 00:34:56.898 "req_id": 1 00:34:56.898 } 00:34:56.898 Got JSON-RPC error response 00:34:56.898 response: 00:34:56.898 { 00:34:56.898 "code": -32602, 00:34:56.898 "message": "Invalid parameters" 00:34:56.898 } 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:56.898 15:27:58 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:56.898 15:27:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=1706982 00:34:56.898 15:27:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1706982 /var/tmp/bperf.sock 00:34:56.898 15:27:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1706982 ']' 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:56.898 [2024-12-09 15:27:58.483111] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:34:56.898 [2024-12-09 15:27:58.483152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706982 ] 00:34:56.898 [2024-12-09 15:27:58.558775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.898 [2024-12-09 15:27:58.598994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.898 15:27:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:56.898 15:27:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:34:56.898 15:27:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:34:57.157 15:27:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Tll5ECG18z 00:34:57.157 15:27:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Tll5ECG18z 00:34:57.416 15:27:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:57.416 15:27:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:57.416 15:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.416 15:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:57.416 15:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.674 15:27:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AFyT9GwWhn == \/\t\m\p\/\t\m\p\.\A\F\y\T\9\G\w\W\h\n ]] 00:34:57.674 15:27:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:57.674 15:27:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:57.674 15:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.674 15:27:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.674 15:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:57.933 15:27:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Tll5ECG18z == \/\t\m\p\/\t\m\p\.\T\l\l\5\E\C\G\1\8\z ]] 00:34:57.933 15:27:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.933 15:27:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:57.933 15:27:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:57.933 15:27:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:58.192 15:27:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:58.192 15:27:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.192 15:27:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:58.450 [2024-12-09 15:28:00.054079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:58.450 nvme0n1 00:34:58.450 15:28:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:58.450 15:28:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:58.450 15:28:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.450 15:28:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.450 15:28:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:58.450 15:28:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.709 15:28:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:58.709 15:28:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:58.709 15:28:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:58.709 15:28:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.709 15:28:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.709 15:28:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:58.709 15:28:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.968 15:28:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:58.968 15:28:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.968 Running I/O for 1 seconds... 00:34:59.904 19402.00 IOPS, 75.79 MiB/s 00:34:59.904 Latency(us) 00:34:59.904 [2024-12-09T14:28:01.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.904 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:59.904 nvme0n1 : 1.00 19447.46 75.97 0.00 0.00 6570.03 2777.48 16852.11 00:34:59.904 [2024-12-09T14:28:01.700Z] =================================================================================================================== 00:34:59.905 [2024-12-09T14:28:01.700Z] Total : 19447.46 75.97 0.00 0.00 6570.03 2777.48 16852.11 00:34:59.905 { 00:34:59.905 "results": [ 00:34:59.905 { 00:34:59.905 "job": "nvme0n1", 00:34:59.905 "core_mask": "0x2", 00:34:59.905 "workload": "randrw", 00:34:59.905 "percentage": 50, 00:34:59.905 "status": "finished", 00:34:59.905 "queue_depth": 128, 00:34:59.905 "io_size": 4096, 00:34:59.905 "runtime": 1.004347, 00:34:59.905 "iops": 19447.461883193755, 00:34:59.905 "mibps": 75.9666479812256, 00:34:59.905 "io_failed": 0, 00:34:59.905 "io_timeout": 0, 00:34:59.905 "avg_latency_us": 6570.025861541013, 00:34:59.905 "min_latency_us": 2777.478095238095, 00:34:59.905 "max_latency_us": 16852.114285714284 00:34:59.905 } 00:34:59.905 ], 00:34:59.905 "core_count": 1 00:34:59.905 } 00:34:59.905 15:28:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:59.905 15:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:00.163 15:28:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:00.163 15:28:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.163 15:28:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.163 15:28:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.163 15:28:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.163 15:28:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.422 15:28:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:00.422 15:28:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:00.422 15:28:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:00.422 15:28:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.422 15:28:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.422 15:28:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:00.422 15:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.681 15:28:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:00.681 15:28:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.681 
15:28:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.681 15:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:00.681 [2024-12-09 15:28:02.435724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:00.681 [2024-12-09 15:28:02.436434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a31770 (107): Transport endpoint is not connected 00:35:00.681 [2024-12-09 15:28:02.437430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a31770 (9): Bad file descriptor 00:35:00.681 [2024-12-09 15:28:02.438431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:00.681 [2024-12-09 15:28:02.438440] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:00.681 [2024-12-09 15:28:02.438448] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:00.681 [2024-12-09 15:28:02.438456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
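[Editor's sketch, not part of the captured log] The attach attempt traced above used --psk key1 rather than key0, so the connection drops during setup ("Transport endpoint is not connected", "Bad file descriptor") and the RPC fails with the error dumped next. For reference, the positive-path call the test issued earlier with the matching key, in sketch form ($rpc shorthand as in the sketch above):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # with key0 the controller (nvme0n1) comes up and the 1-second bdevperf run above
  # completes at roughly 19.4k IOPS; with key1 the same call returns an I/O error (-5)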
00:35:00.681 request: 00:35:00.681 { 00:35:00.681 "name": "nvme0", 00:35:00.681 "trtype": "tcp", 00:35:00.681 "traddr": "127.0.0.1", 00:35:00.681 "adrfam": "ipv4", 00:35:00.681 "trsvcid": "4420", 00:35:00.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:00.681 "prchk_reftag": false, 00:35:00.681 "prchk_guard": false, 00:35:00.681 "hdgst": false, 00:35:00.681 "ddgst": false, 00:35:00.681 "psk": "key1", 00:35:00.681 "allow_unrecognized_csi": false, 00:35:00.681 "method": "bdev_nvme_attach_controller", 00:35:00.681 "req_id": 1 00:35:00.681 } 00:35:00.681 Got JSON-RPC error response 00:35:00.681 response: 00:35:00.681 { 00:35:00.681 "code": -5, 00:35:00.681 "message": "Input/output error" 00:35:00.681 } 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:00.681 15:28:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:00.681 15:28:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:00.681 15:28:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:00.681 15:28:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.681 15:28:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.681 15:28:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:00.681 15:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:00.940 15:28:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:00.940 15:28:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:00.940 15:28:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:00.940 15:28:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:00.940 15:28:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:00.940 15:28:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:00.940 15:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.199 15:28:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:01.199 15:28:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:01.199 15:28:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:01.458 15:28:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:01.458 15:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:01.458 15:28:03 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:01.458 15:28:03 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:01.458 15:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:01.717 15:28:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:01.717 15:28:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AFyT9GwWhn 00:35:01.717 15:28:03 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:01.717 15:28:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:35:01.717 15:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:35:01.976 [2024-12-09 15:28:03.603575] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AFyT9GwWhn': 0100660 00:35:01.976 [2024-12-09 15:28:03.603596] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:01.976 request: 00:35:01.976 { 00:35:01.976 "name": "key0", 00:35:01.976 "path": "/tmp/tmp.AFyT9GwWhn", 00:35:01.976 "method": "keyring_file_add_key", 00:35:01.976 "req_id": 1 00:35:01.976 } 00:35:01.976 Got JSON-RPC error response 00:35:01.976 response: 00:35:01.976 { 00:35:01.976 "code": -1, 00:35:01.976 "message": "Operation not permitted" 00:35:01.976 } 00:35:01.976 15:28:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:01.976 15:28:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:01.976 15:28:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:01.976 15:28:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:01.976 15:28:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AFyT9GwWhn 00:35:01.976 15:28:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:35:01.976 15:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AFyT9GwWhn 00:35:02.234 15:28:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AFyT9GwWhn 00:35:02.235 15:28:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:02.235 15:28:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:02.235 15:28:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:02.235 15:28:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.235 15:28:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:02.235 15:28:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.494 15:28:04 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:02.494 15:28:04 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.494 15:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:02.494 [2024-12-09 15:28:04.213181] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AFyT9GwWhn': No such file or directory 00:35:02.494 [2024-12-09 15:28:04.213199] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:02.494 [2024-12-09 15:28:04.213214] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:02.494 [2024-12-09 15:28:04.213240] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:02.494 [2024-12-09 15:28:04.213249] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:02.494 [2024-12-09 15:28:04.213255] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:02.494 request: 00:35:02.494 { 00:35:02.494 "name": "nvme0", 00:35:02.494 "trtype": "tcp", 00:35:02.494 "traddr": "127.0.0.1", 00:35:02.494 "adrfam": "ipv4", 00:35:02.494 "trsvcid": "4420", 00:35:02.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.494 "prchk_reftag": false, 00:35:02.494 "prchk_guard": false, 00:35:02.494 "hdgst": false, 00:35:02.494 "ddgst": false, 00:35:02.494 "psk": "key0", 00:35:02.494 "allow_unrecognized_csi": false, 00:35:02.494 "method": "bdev_nvme_attach_controller", 00:35:02.494 "req_id": 1 00:35:02.494 } 00:35:02.494 Got JSON-RPC error response 00:35:02.494 response: 00:35:02.494 { 00:35:02.494 "code": -19, 00:35:02.494 "message": "No such device" 00:35:02.494 } 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:02.494 15:28:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:02.494 15:28:04 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:02.494 15:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:02.753 15:28:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FGmqXdumSx 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:02.753 15:28:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:02.753 15:28:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:02.753 15:28:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:02.753 15:28:04 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:02.753 15:28:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:02.753 15:28:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FGmqXdumSx 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FGmqXdumSx 00:35:02.753 15:28:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FGmqXdumSx 00:35:02.753 15:28:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FGmqXdumSx 00:35:02.753 15:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FGmqXdumSx 00:35:03.011 15:28:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:03.011 15:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:03.270 nvme0n1 00:35:03.270 15:28:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:03.270 15:28:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.270 15:28:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.270 15:28:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.270 15:28:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.270 15:28:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.528 15:28:05 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:03.528 15:28:05 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:03.528 15:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:03.786 15:28:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:03.786 15:28:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:03.786 15:28:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:03.786 15:28:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:03.786 15:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.045 15:28:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:04.045 15:28:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:04.045 15:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:04.303 15:28:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:04.303 15:28:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:04.303 15:28:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.563 15:28:06 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:04.563 15:28:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FGmqXdumSx 00:35:04.563 15:28:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FGmqXdumSx 00:35:04.563 15:28:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Tll5ECG18z 00:35:04.563 15:28:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Tll5ECG18z 00:35:04.822 15:28:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:04.822 15:28:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:05.080 nvme0n1 00:35:05.080 15:28:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:05.080 15:28:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:05.340 15:28:07 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:05.340 "subsystems": [ 00:35:05.340 { 00:35:05.340 "subsystem": "keyring", 00:35:05.340 "config": [ 00:35:05.340 { 00:35:05.340 "method": "keyring_file_add_key", 00:35:05.340 "params": { 00:35:05.340 "name": "key0", 00:35:05.340 "path": "/tmp/tmp.FGmqXdumSx" 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "keyring_file_add_key", 00:35:05.340 "params": { 00:35:05.340 "name": "key1", 00:35:05.340 "path": "/tmp/tmp.Tll5ECG18z" 00:35:05.340 } 00:35:05.340 } 00:35:05.340 ] 00:35:05.340 
}, 00:35:05.340 { 00:35:05.340 "subsystem": "iobuf", 00:35:05.340 "config": [ 00:35:05.340 { 00:35:05.340 "method": "iobuf_set_options", 00:35:05.340 "params": { 00:35:05.340 "small_pool_count": 8192, 00:35:05.340 "large_pool_count": 1024, 00:35:05.340 "small_bufsize": 8192, 00:35:05.340 "large_bufsize": 135168, 00:35:05.340 "enable_numa": false 00:35:05.340 } 00:35:05.340 } 00:35:05.340 ] 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "subsystem": "sock", 00:35:05.340 "config": [ 00:35:05.340 { 00:35:05.340 "method": "sock_set_default_impl", 00:35:05.340 "params": { 00:35:05.340 "impl_name": "posix" 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "sock_impl_set_options", 00:35:05.340 "params": { 00:35:05.340 "impl_name": "ssl", 00:35:05.340 "recv_buf_size": 4096, 00:35:05.340 "send_buf_size": 4096, 00:35:05.340 "enable_recv_pipe": true, 00:35:05.340 "enable_quickack": false, 00:35:05.340 "enable_placement_id": 0, 00:35:05.340 "enable_zerocopy_send_server": true, 00:35:05.340 "enable_zerocopy_send_client": false, 00:35:05.340 "zerocopy_threshold": 0, 00:35:05.340 "tls_version": 0, 00:35:05.340 "enable_ktls": false 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "sock_impl_set_options", 00:35:05.340 "params": { 00:35:05.340 "impl_name": "posix", 00:35:05.340 "recv_buf_size": 2097152, 00:35:05.340 "send_buf_size": 2097152, 00:35:05.340 "enable_recv_pipe": true, 00:35:05.340 "enable_quickack": false, 00:35:05.340 "enable_placement_id": 0, 00:35:05.340 "enable_zerocopy_send_server": true, 00:35:05.340 "enable_zerocopy_send_client": false, 00:35:05.340 "zerocopy_threshold": 0, 00:35:05.340 "tls_version": 0, 00:35:05.340 "enable_ktls": false 00:35:05.340 } 00:35:05.340 } 00:35:05.340 ] 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "subsystem": "vmd", 00:35:05.340 "config": [] 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "subsystem": "accel", 00:35:05.340 "config": [ 00:35:05.340 { 00:35:05.340 "method": "accel_set_options", 00:35:05.340 "params": { 00:35:05.340 "small_cache_size": 128, 00:35:05.340 "large_cache_size": 16, 00:35:05.340 "task_count": 2048, 00:35:05.340 "sequence_count": 2048, 00:35:05.340 "buf_count": 2048 00:35:05.340 } 00:35:05.340 } 00:35:05.340 ] 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "subsystem": "bdev", 00:35:05.340 "config": [ 00:35:05.340 { 00:35:05.340 "method": "bdev_set_options", 00:35:05.340 "params": { 00:35:05.340 "bdev_io_pool_size": 65535, 00:35:05.340 "bdev_io_cache_size": 256, 00:35:05.340 "bdev_auto_examine": true, 00:35:05.340 "iobuf_small_cache_size": 128, 00:35:05.340 "iobuf_large_cache_size": 16 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "bdev_raid_set_options", 00:35:05.340 "params": { 00:35:05.340 "process_window_size_kb": 1024, 00:35:05.340 "process_max_bandwidth_mb_sec": 0 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "bdev_iscsi_set_options", 00:35:05.340 "params": { 00:35:05.340 "timeout_sec": 30 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "bdev_nvme_set_options", 00:35:05.340 "params": { 00:35:05.340 "action_on_timeout": "none", 00:35:05.340 "timeout_us": 0, 00:35:05.340 "timeout_admin_us": 0, 00:35:05.340 "keep_alive_timeout_ms": 10000, 00:35:05.340 "arbitration_burst": 0, 00:35:05.340 "low_priority_weight": 0, 00:35:05.340 "medium_priority_weight": 0, 00:35:05.340 "high_priority_weight": 0, 00:35:05.340 "nvme_adminq_poll_period_us": 10000, 00:35:05.340 "nvme_ioq_poll_period_us": 0, 00:35:05.340 "io_queue_requests": 512, 00:35:05.340 
"delay_cmd_submit": true, 00:35:05.340 "transport_retry_count": 4, 00:35:05.340 "bdev_retry_count": 3, 00:35:05.340 "transport_ack_timeout": 0, 00:35:05.340 "ctrlr_loss_timeout_sec": 0, 00:35:05.340 "reconnect_delay_sec": 0, 00:35:05.340 "fast_io_fail_timeout_sec": 0, 00:35:05.340 "disable_auto_failback": false, 00:35:05.340 "generate_uuids": false, 00:35:05.340 "transport_tos": 0, 00:35:05.340 "nvme_error_stat": false, 00:35:05.340 "rdma_srq_size": 0, 00:35:05.340 "io_path_stat": false, 00:35:05.340 "allow_accel_sequence": false, 00:35:05.340 "rdma_max_cq_size": 0, 00:35:05.340 "rdma_cm_event_timeout_ms": 0, 00:35:05.340 "dhchap_digests": [ 00:35:05.340 "sha256", 00:35:05.340 "sha384", 00:35:05.340 "sha512" 00:35:05.340 ], 00:35:05.340 "dhchap_dhgroups": [ 00:35:05.340 "null", 00:35:05.340 "ffdhe2048", 00:35:05.340 "ffdhe3072", 00:35:05.340 "ffdhe4096", 00:35:05.340 "ffdhe6144", 00:35:05.340 "ffdhe8192" 00:35:05.340 ] 00:35:05.340 } 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "method": "bdev_nvme_attach_controller", 00:35:05.340 "params": { 00:35:05.341 "name": "nvme0", 00:35:05.341 "trtype": "TCP", 00:35:05.341 "adrfam": "IPv4", 00:35:05.341 "traddr": "127.0.0.1", 00:35:05.341 "trsvcid": "4420", 00:35:05.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.341 "prchk_reftag": false, 00:35:05.341 "prchk_guard": false, 00:35:05.341 "ctrlr_loss_timeout_sec": 0, 00:35:05.341 "reconnect_delay_sec": 0, 00:35:05.341 "fast_io_fail_timeout_sec": 0, 00:35:05.341 "psk": "key0", 00:35:05.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.341 "hdgst": false, 00:35:05.341 "ddgst": false, 00:35:05.341 "multipath": "multipath" 00:35:05.341 } 00:35:05.341 }, 00:35:05.341 { 00:35:05.341 "method": "bdev_nvme_set_hotplug", 00:35:05.341 "params": { 00:35:05.341 "period_us": 100000, 00:35:05.341 "enable": false 00:35:05.341 } 00:35:05.341 }, 00:35:05.341 { 00:35:05.341 "method": "bdev_wait_for_examine" 00:35:05.341 } 00:35:05.341 ] 00:35:05.341 }, 00:35:05.341 { 00:35:05.341 "subsystem": "nbd", 00:35:05.341 "config": [] 00:35:05.341 } 00:35:05.341 ] 00:35:05.341 }' 00:35:05.341 15:28:07 keyring_file -- keyring/file.sh@115 -- # killprocess 1706982 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1706982 ']' 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1706982 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706982 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706982' 00:35:05.341 killing process with pid 1706982 00:35:05.341 15:28:07 keyring_file -- common/autotest_common.sh@973 -- # kill 1706982 00:35:05.341 Received shutdown signal, test time was about 1.000000 seconds 00:35:05.341 00:35:05.341 Latency(us) 00:35:05.341 [2024-12-09T14:28:07.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.341 [2024-12-09T14:28:07.136Z] =================================================================================================================== 00:35:05.341 [2024-12-09T14:28:07.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.341 15:28:07 
keyring_file -- common/autotest_common.sh@978 -- # wait 1706982 00:35:05.600 15:28:07 keyring_file -- keyring/file.sh@118 -- # bperfpid=1708531 00:35:05.600 15:28:07 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1708531 /var/tmp/bperf.sock 00:35:05.600 15:28:07 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1708531 ']' 00:35:05.600 15:28:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.600 15:28:07 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:05.600 15:28:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.600 15:28:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.600 15:28:07 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:05.600 "subsystems": [ 00:35:05.600 { 00:35:05.600 "subsystem": "keyring", 00:35:05.600 "config": [ 00:35:05.600 { 00:35:05.600 "method": "keyring_file_add_key", 00:35:05.600 "params": { 00:35:05.600 "name": "key0", 00:35:05.600 "path": "/tmp/tmp.FGmqXdumSx" 00:35:05.600 } 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "method": "keyring_file_add_key", 00:35:05.600 "params": { 00:35:05.600 "name": "key1", 00:35:05.600 "path": "/tmp/tmp.Tll5ECG18z" 00:35:05.600 } 00:35:05.600 } 00:35:05.600 ] 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "subsystem": "iobuf", 00:35:05.600 "config": [ 00:35:05.600 { 00:35:05.600 "method": "iobuf_set_options", 00:35:05.600 "params": { 00:35:05.600 "small_pool_count": 8192, 00:35:05.600 "large_pool_count": 1024, 00:35:05.600 "small_bufsize": 8192, 00:35:05.600 "large_bufsize": 135168, 00:35:05.600 "enable_numa": false 00:35:05.600 } 00:35:05.600 } 00:35:05.600 ] 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "subsystem": "sock", 00:35:05.600 "config": [ 00:35:05.600 { 00:35:05.600 "method": "sock_set_default_impl", 00:35:05.600 "params": { 00:35:05.600 "impl_name": "posix" 00:35:05.600 } 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "method": "sock_impl_set_options", 00:35:05.600 "params": { 00:35:05.600 "impl_name": "ssl", 00:35:05.600 "recv_buf_size": 4096, 00:35:05.600 "send_buf_size": 4096, 00:35:05.600 "enable_recv_pipe": true, 00:35:05.600 "enable_quickack": false, 00:35:05.600 "enable_placement_id": 0, 00:35:05.600 "enable_zerocopy_send_server": true, 00:35:05.600 "enable_zerocopy_send_client": false, 00:35:05.600 "zerocopy_threshold": 0, 00:35:05.600 "tls_version": 0, 00:35:05.600 "enable_ktls": false 00:35:05.600 } 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "method": "sock_impl_set_options", 00:35:05.600 "params": { 00:35:05.600 "impl_name": "posix", 00:35:05.600 "recv_buf_size": 2097152, 00:35:05.600 "send_buf_size": 2097152, 00:35:05.600 "enable_recv_pipe": true, 00:35:05.600 "enable_quickack": false, 00:35:05.600 "enable_placement_id": 0, 00:35:05.600 "enable_zerocopy_send_server": true, 00:35:05.600 "enable_zerocopy_send_client": false, 00:35:05.600 "zerocopy_threshold": 0, 00:35:05.600 "tls_version": 0, 00:35:05.600 "enable_ktls": false 00:35:05.600 } 00:35:05.600 } 00:35:05.600 ] 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "subsystem": "vmd", 00:35:05.600 "config": [] 00:35:05.600 }, 00:35:05.600 { 00:35:05.600 "subsystem": "accel", 00:35:05.600 "config": [ 00:35:05.600 
{ 00:35:05.600 "method": "accel_set_options", 00:35:05.600 "params": { 00:35:05.600 "small_cache_size": 128, 00:35:05.601 "large_cache_size": 16, 00:35:05.601 "task_count": 2048, 00:35:05.601 "sequence_count": 2048, 00:35:05.601 "buf_count": 2048 00:35:05.601 } 00:35:05.601 } 00:35:05.601 ] 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "subsystem": "bdev", 00:35:05.601 "config": [ 00:35:05.601 { 00:35:05.601 "method": "bdev_set_options", 00:35:05.601 "params": { 00:35:05.601 "bdev_io_pool_size": 65535, 00:35:05.601 "bdev_io_cache_size": 256, 00:35:05.601 "bdev_auto_examine": true, 00:35:05.601 "iobuf_small_cache_size": 128, 00:35:05.601 "iobuf_large_cache_size": 16 00:35:05.601 } 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "method": "bdev_raid_set_options", 00:35:05.601 "params": { 00:35:05.601 "process_window_size_kb": 1024, 00:35:05.601 "process_max_bandwidth_mb_sec": 0 00:35:05.601 } 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "method": "bdev_iscsi_set_options", 00:35:05.601 "params": { 00:35:05.601 "timeout_sec": 30 00:35:05.601 } 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "method": "bdev_nvme_set_options", 00:35:05.601 "params": { 00:35:05.601 "action_on_timeout": "none", 00:35:05.601 "timeout_us": 0, 00:35:05.601 "timeout_admin_us": 0, 00:35:05.601 "keep_alive_timeout_ms": 10000, 00:35:05.601 "arbitration_burst": 0, 00:35:05.601 "low_priority_weight": 0, 00:35:05.601 "medium_priority_weight": 0, 00:35:05.601 "high_priority_weight": 0, 00:35:05.601 "nvme_adminq_poll_period_us": 10000, 00:35:05.601 "nvme_ioq_poll_period_us": 0, 00:35:05.601 "io_queue_requests": 512, 00:35:05.601 "delay_cmd_submit": true, 00:35:05.601 "transport_retry_count": 4, 00:35:05.601 "bdev_retry_count": 3, 00:35:05.601 "transport_ack_timeout": 0, 00:35:05.601 "ctrlr_loss_timeout_sec": 0, 00:35:05.601 "reconnect_delay_sec": 0, 00:35:05.601 "fast_io_fail_timeout_sec": 0, 00:35:05.601 "disable_auto_failback": false, 00:35:05.601 "generate_uuids": false, 00:35:05.601 "transport_tos": 0, 00:35:05.601 "nvme_error_stat": false, 00:35:05.601 "rdma_srq_size": 0, 00:35:05.601 "io_path_stat": false, 00:35:05.601 "allow_accel_sequence": false, 00:35:05.601 "rdma_max_cq_size": 0, 00:35:05.601 "rdma_cm_event_timeout_ms": 0, 00:35:05.601 "dhchap_digests": [ 00:35:05.601 "sha256", 00:35:05.601 "sha384", 00:35:05.601 "sha512" 00:35:05.601 ], 00:35:05.601 "dhchap_dhgroups": [ 00:35:05.601 "null", 00:35:05.601 "ffdhe2048", 00:35:05.601 "ffdhe3072", 00:35:05.601 "ffdhe4096", 00:35:05.601 "ffdhe6144", 00:35:05.601 "ffdhe8192" 00:35:05.601 ] 00:35:05.601 } 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "method": "bdev_nvme_attach_controller", 00:35:05.601 "params": { 00:35:05.601 "name": "nvme0", 00:35:05.601 "trtype": "TCP", 00:35:05.601 "adrfam": "IPv4", 00:35:05.601 "traddr": "127.0.0.1", 00:35:05.601 "trsvcid": "4420", 00:35:05.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.601 "prchk_reftag": false, 00:35:05.601 "prchk_guard": false, 00:35:05.601 "ctrlr_loss_timeout_sec": 0, 00:35:05.601 "reconnect_delay_sec": 0, 00:35:05.601 "fast_io_fail_timeout_sec": 0, 00:35:05.601 "psk": "key0", 00:35:05.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.601 "hdgst": false, 00:35:05.601 "ddgst": false, 00:35:05.601 "multipath": "multipath" 00:35:05.601 } 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "method": "bdev_nvme_set_hotplug", 00:35:05.601 "params": { 00:35:05.601 "period_us": 100000, 00:35:05.601 "enable": false 00:35:05.601 } 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "method": "bdev_wait_for_examine" 00:35:05.601 } 00:35:05.601 
] 00:35:05.601 }, 00:35:05.601 { 00:35:05.601 "subsystem": "nbd", 00:35:05.601 "config": [] 00:35:05.601 } 00:35:05.601 ] 00:35:05.601 }' 00:35:05.601 15:28:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.601 15:28:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.601 [2024-12-09 15:28:07.307924] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 00:35:05.601 [2024-12-09 15:28:07.307971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708531 ] 00:35:05.601 [2024-12-09 15:28:07.365924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.860 [2024-12-09 15:28:07.408175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.860 [2024-12-09 15:28:07.569695] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:06.427 15:28:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.427 15:28:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.428 15:28:08 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:06.428 15:28:08 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:06.428 15:28:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.686 15:28:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:06.686 15:28:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:06.686 15:28:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.686 15:28:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.686 15:28:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.686 15:28:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.686 15:28:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.945 15:28:08 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:06.945 15:28:08 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:06.945 15:28:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.945 15:28:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.945 15:28:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.945 15:28:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.945 15:28:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:07.204 15:28:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.FGmqXdumSx /tmp/tmp.Tll5ECG18z 00:35:07.204 15:28:08 keyring_file -- keyring/file.sh@20 -- # killprocess 1708531 00:35:07.204 15:28:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1708531 ']' 00:35:07.204 15:28:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1708531 00:35:07.204 15:28:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:07.204 15:28:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.204 15:28:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708531 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708531' 00:35:07.463 killing process with pid 1708531 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@973 -- # kill 1708531 00:35:07.463 Received shutdown signal, test time was about 1.000000 seconds 00:35:07.463 00:35:07.463 Latency(us) 00:35:07.463 [2024-12-09T14:28:09.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.463 [2024-12-09T14:28:09.258Z] =================================================================================================================== 00:35:07.463 [2024-12-09T14:28:09.258Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@978 -- # wait 1708531 00:35:07.463 15:28:09 keyring_file -- keyring/file.sh@21 -- # killprocess 1706972 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1706972 ']' 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1706972 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706972 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706972' 00:35:07.463 killing process with pid 1706972 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@973 -- # kill 1706972 00:35:07.463 15:28:09 keyring_file -- common/autotest_common.sh@978 -- # wait 1706972 00:35:08.031 00:35:08.031 real 0m11.823s 00:35:08.031 user 0m29.443s 00:35:08.031 sys 0m2.703s 00:35:08.031 15:28:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.031 15:28:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:08.031 ************************************ 00:35:08.031 END TEST keyring_file 00:35:08.031 ************************************ 00:35:08.031 15:28:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:08.031 15:28:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:08.031 15:28:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:08.031 15:28:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.031 15:28:09 -- 
common/autotest_common.sh@10 -- # set +x 00:35:08.031 ************************************ 00:35:08.031 START TEST keyring_linux 00:35:08.032 ************************************ 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:08.032 Joined session keyring: 988934557 00:35:08.032 * Looking for test storage... 00:35:08.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:08.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.032 --rc genhtml_branch_coverage=1 00:35:08.032 --rc genhtml_function_coverage=1 00:35:08.032 --rc genhtml_legend=1 00:35:08.032 --rc geninfo_all_blocks=1 00:35:08.032 --rc geninfo_unexecuted_blocks=1 00:35:08.032 00:35:08.032 ' 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:08.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.032 --rc genhtml_branch_coverage=1 00:35:08.032 --rc genhtml_function_coverage=1 00:35:08.032 --rc genhtml_legend=1 00:35:08.032 --rc geninfo_all_blocks=1 00:35:08.032 --rc geninfo_unexecuted_blocks=1 00:35:08.032 00:35:08.032 ' 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:08.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.032 --rc genhtml_branch_coverage=1 00:35:08.032 --rc genhtml_function_coverage=1 00:35:08.032 --rc genhtml_legend=1 00:35:08.032 --rc geninfo_all_blocks=1 00:35:08.032 --rc geninfo_unexecuted_blocks=1 00:35:08.032 00:35:08.032 ' 00:35:08.032 15:28:09 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:08.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.032 --rc genhtml_branch_coverage=1 00:35:08.032 --rc genhtml_function_coverage=1 00:35:08.032 --rc genhtml_legend=1 00:35:08.032 --rc geninfo_all_blocks=1 00:35:08.032 --rc geninfo_unexecuted_blocks=1 00:35:08.032 00:35:08.032 ' 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.032 15:28:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.032 15:28:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.032 15:28:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.032 15:28:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.032 15:28:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:08.032 15:28:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
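The prep_key steps traced a little further below wrap each raw hex key (key0=00112233445566778899aabbccddeeff, key1=112233445566778899aabbccddeeff00) in the NVMe TLS PSK interchange format (prefix, two-digit digest id, base64 of the key characters plus a CRC32, and a trailing colon) and drop the result into a mode-0600 file that bperf later loads by path. A minimal sketch of that transformation, assuming the same key material as the test; the inline Python stands in for the bare "python -" step visible in the trace and may differ in detail from the real format_interchange_psk helper:

  # Sketch: wrap a raw key string as an NVMe TLS interchange PSK (digest 0) and store it.
  key=00112233445566778899aabbccddeeff
  path=/tmp/:spdk-test:key0
  psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key")
  printf '%s\n' "$psk" > "$path"   # yields NVMeTLSkey-1:00:MDAxMTIy...: as echoed in the trace
  chmod 0600 "$path"               # matches the chmod 0600 /tmp/:spdk-test:key0 step below
  echo "$path"                     # prep_key hands the file path back to the caller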
00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:08.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:08.032 15:28:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:08.032 15:28:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:08.032 15:28:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:08.033 15:28:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:08.033 15:28:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:08.033 15:28:09 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:08.033 15:28:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:08.033 15:28:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:08.291 /tmp/:spdk-test:key0 00:35:08.291 15:28:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:08.291 
15:28:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:08.291 15:28:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:08.291 15:28:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:08.291 15:28:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:08.291 15:28:09 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:08.291 15:28:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:08.291 15:28:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:08.291 15:28:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:08.291 /tmp/:spdk-test:key1 00:35:08.291 15:28:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1709023 00:35:08.291 15:28:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1709023 00:35:08.291 15:28:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:08.291 15:28:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1709023 ']' 00:35:08.291 15:28:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.291 15:28:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.291 15:28:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.291 15:28:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.291 15:28:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:08.291 [2024-12-09 15:28:09.928506] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
00:35:08.291 [2024-12-09 15:28:09.928553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709023 ] 00:35:08.291 [2024-12-09 15:28:10.002119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.292 [2024-12-09 15:28:10.048510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:08.551 15:28:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:08.551 [2024-12-09 15:28:10.271273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.551 null0 00:35:08.551 [2024-12-09 15:28:10.303322] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:08.551 [2024-12-09 15:28:10.303607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.551 15:28:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:08.551 753923040 00:35:08.551 15:28:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:08.551 793930257 00:35:08.551 15:28:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1709169 00:35:08.551 15:28:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:08.551 15:28:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1709169 /var/tmp/bperf.sock 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1709169 ']' 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.551 15:28:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:08.810 [2024-12-09 15:28:10.374213] Starting SPDK v25.01-pre git sha1 3318278a6 / DPDK 24.03.0 initialization... 
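The two bare numbers printed above, 753923040 and 793930257, are the kernel key serials that keyctl returns when the derived PSK files are loaded into the session keyring (@s); the test later resolves and unlinks them by serial. A condensed sketch of that keyctl round trip, assuming the PSK files written in the previous step (the shell variables are illustrative, the keyctl invocations are the ones the trace itself runs):

  # Load both interchange PSKs into the session keyring; keyctl prints the new serial number.
  sn0=$(keyctl add user :spdk-test:key0 "$(< /tmp/:spdk-test:key0)" @s)
  sn1=$(keyctl add user :spdk-test:key1 "$(< /tmp/:spdk-test:key1)" @s)
  keyctl search @s user :spdk-test:key0   # resolves the name back to $sn0 (753923040 here)
  keyctl print "$sn0"                     # dumps the NVMeTLSkey-1:00:... payload for comparison
  keyctl unlink "$sn0"                    # cleanup step; prints "1 links removed" as near the end of the test
  keyctl unlink "$sn1"

The bdevperf instance whose startup banner continues below is launched with --wait-for-rpc, so the keyring_linux plugin can be enabled over /var/tmp/bperf.sock (keyring_linux_set_options --enable, then framework_start_init) before any bdev I/O starts.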
00:35:08.810 [2024-12-09 15:28:10.374262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709169 ] 00:35:08.810 [2024-12-09 15:28:10.447510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.810 [2024-12-09 15:28:10.488364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.810 15:28:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.810 15:28:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:08.810 15:28:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:08.810 15:28:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:09.068 15:28:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:09.068 15:28:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:09.327 15:28:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:09.327 15:28:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:09.586 [2024-12-09 15:28:11.157253] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:09.586 nvme0n1 00:35:09.586 15:28:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:09.586 15:28:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:09.586 15:28:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:09.586 15:28:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:09.586 15:28:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.586 15:28:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:09.845 15:28:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:09.845 15:28:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:09.845 15:28:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:09.845 15:28:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:09.845 15:28:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.845 15:28:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:09.845 15:28:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.104 15:28:11 keyring_linux -- keyring/linux.sh@25 -- # sn=753923040 00:35:10.104 15:28:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:10.104 15:28:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:10.104 15:28:11 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 753923040 == \7\5\3\9\2\3\0\4\0 ]] 00:35:10.104 15:28:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 753923040 00:35:10.104 15:28:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:10.104 15:28:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:10.104 Running I/O for 1 seconds... 00:35:11.040 21727.00 IOPS, 84.87 MiB/s 00:35:11.040 Latency(us) 00:35:11.040 [2024-12-09T14:28:12.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.040 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:11.040 nvme0n1 : 1.01 21727.89 84.87 0.00 0.00 5871.63 4930.80 13419.28 00:35:11.040 [2024-12-09T14:28:12.835Z] =================================================================================================================== 00:35:11.040 [2024-12-09T14:28:12.835Z] Total : 21727.89 84.87 0.00 0.00 5871.63 4930.80 13419.28 00:35:11.040 { 00:35:11.040 "results": [ 00:35:11.040 { 00:35:11.040 "job": "nvme0n1", 00:35:11.040 "core_mask": "0x2", 00:35:11.040 "workload": "randread", 00:35:11.040 "status": "finished", 00:35:11.040 "queue_depth": 128, 00:35:11.040 "io_size": 4096, 00:35:11.040 "runtime": 1.00585, 00:35:11.040 "iops": 21727.89183277825, 00:35:11.040 "mibps": 84.87457747179003, 00:35:11.040 "io_failed": 0, 00:35:11.040 "io_timeout": 0, 00:35:11.040 "avg_latency_us": 5871.628204420912, 00:35:11.040 "min_latency_us": 4930.80380952381, 00:35:11.040 "max_latency_us": 13419.27619047619 00:35:11.040 } 00:35:11.040 ], 00:35:11.040 "core_count": 1 00:35:11.040 } 00:35:11.040 15:28:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:11.040 15:28:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:11.299 15:28:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:11.299 15:28:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:11.299 15:28:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:11.299 15:28:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:11.299 15:28:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:11.299 15:28:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.558 15:28:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:11.558 15:28:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:11.558 15:28:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:11.558 15:28:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.558 15:28:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.558 15:28:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:11.817 [2024-12-09 15:28:13.362028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:11.817 [2024-12-09 15:28:13.362859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31500 (107): Transport endpoint is not connected 00:35:11.817 [2024-12-09 15:28:13.363854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31500 (9): Bad file descriptor 00:35:11.817 [2024-12-09 15:28:13.364856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:11.817 [2024-12-09 15:28:13.364870] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:11.817 [2024-12-09 15:28:13.364877] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:11.818 [2024-12-09 15:28:13.364886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
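The attach attempt with :spdk-test:key1 just above is an expected failure: it runs under the NOT wrapper, whose bookkeeping (local es=0, valid_exec_arg, es=1, the (( es > 128 )) and (( !es == 0 )) checks) is interleaved with the connection errors, and the request/response dump that follows is the JSON-RPC error the wrapper deliberately tolerates. The failure is presumably down to the target side only having been set up for key0, so no matching TLS identity exists for key1. A simplified sketch of the wrapper pattern, not the exact autotest_common.sh code; the real helper also validates its first argument and special-cases signal exits, both of which are no-ops in this trace:

  NOT() {
      local es=0
      "$@" || es=$?    # run the command and capture its exit status
      (( !es == 0 ))   # succeed only when the wrapped command failed
  }
  # Usage as in the trace: the controller attach must fail for the test to pass.
  NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1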
00:35:11.818 request: 00:35:11.818 { 00:35:11.818 "name": "nvme0", 00:35:11.818 "trtype": "tcp", 00:35:11.818 "traddr": "127.0.0.1", 00:35:11.818 "adrfam": "ipv4", 00:35:11.818 "trsvcid": "4420", 00:35:11.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.818 "prchk_reftag": false, 00:35:11.818 "prchk_guard": false, 00:35:11.818 "hdgst": false, 00:35:11.818 "ddgst": false, 00:35:11.818 "psk": ":spdk-test:key1", 00:35:11.818 "allow_unrecognized_csi": false, 00:35:11.818 "method": "bdev_nvme_attach_controller", 00:35:11.818 "req_id": 1 00:35:11.818 } 00:35:11.818 Got JSON-RPC error response 00:35:11.818 response: 00:35:11.818 { 00:35:11.818 "code": -5, 00:35:11.818 "message": "Input/output error" 00:35:11.818 } 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@33 -- # sn=753923040 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 753923040 00:35:11.818 1 links removed 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@33 -- # sn=793930257 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 793930257 00:35:11.818 1 links removed 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1709169 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1709169 ']' 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1709169 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709169 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709169' 00:35:11.818 killing process with pid 1709169 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 1709169 00:35:11.818 Received shutdown signal, test time was about 1.000000 seconds 00:35:11.818 00:35:11.818 
Latency(us) 00:35:11.818 [2024-12-09T14:28:13.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.818 [2024-12-09T14:28:13.613Z] =================================================================================================================== 00:35:11.818 [2024-12-09T14:28:13.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 1709169 00:35:11.818 15:28:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1709023 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1709023 ']' 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1709023 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.818 15:28:13 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1709023 00:35:12.077 15:28:13 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:12.077 15:28:13 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:12.077 15:28:13 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1709023' 00:35:12.077 killing process with pid 1709023 00:35:12.077 15:28:13 keyring_linux -- common/autotest_common.sh@973 -- # kill 1709023 00:35:12.077 15:28:13 keyring_linux -- common/autotest_common.sh@978 -- # wait 1709023 00:35:12.336 00:35:12.336 real 0m4.371s 00:35:12.336 user 0m8.282s 00:35:12.336 sys 0m1.431s 00:35:12.336 15:28:13 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.336 15:28:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:12.336 ************************************ 00:35:12.336 END TEST keyring_linux 00:35:12.336 ************************************ 00:35:12.336 15:28:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:12.336 15:28:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:12.336 15:28:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:12.336 15:28:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:12.336 15:28:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:12.336 15:28:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:12.336 15:28:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:12.336 15:28:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.336 15:28:13 -- common/autotest_common.sh@10 -- # set +x 00:35:12.336 15:28:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:12.336 15:28:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:12.336 15:28:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:12.336 15:28:14 -- common/autotest_common.sh@10 -- # set +x 00:35:17.609 INFO: APP EXITING 
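Every shutdown in this run, bperf and target alike, goes through the killprocess helper; its trace is the recurring '[' -z pid ']' / kill -0 / ps --no-headers -o comm= / '[' reactor_N = sudo ']' sequence seen above, followed by the echo, the kill, and the wait that collects the exit status. A condensed sketch of that shape (error handling trimmed and return codes approximate; the sudo guard and the final wait are the parts the trace actually exercises):

  killprocess() {
      local pid=$1 name=
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                    # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for the target, reactor_1 for bperf
      fi
      [ "$name" = sudo ] && return 1                # refuse to kill a bare sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap it so the test can report its exit cleanly
  }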
00:35:17.609 INFO: killing all VMs 00:35:17.609 INFO: killing vhost app 00:35:17.609 INFO: EXIT DONE 00:35:20.143 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:20.712 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:20.712 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:20.712 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:20.971 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:20.971 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:20.971 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:20.971 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:23.646 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:23.905 Cleaning 00:35:23.905 Removing: /var/run/dpdk/spdk0/config 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:23.905 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:23.905 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:23.905 Removing: /var/run/dpdk/spdk1/config 00:35:23.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:23.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:23.906 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:23.906 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:23.906 Removing: /var/run/dpdk/spdk2/config 00:35:23.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:23.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:23.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:24.165 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:24.165 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:24.165 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:24.165 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:24.165 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:24.165 Removing: 
/var/run/dpdk/spdk2/fbarray_memzone 00:35:24.165 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:24.165 Removing: /var/run/dpdk/spdk3/config 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:24.165 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:24.165 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:24.165 Removing: /var/run/dpdk/spdk4/config 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:24.165 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:24.165 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:24.165 Removing: /dev/shm/bdev_svc_trace.1 00:35:24.165 Removing: /dev/shm/nvmf_trace.0 00:35:24.165 Removing: /dev/shm/spdk_tgt_trace.pid1233652 00:35:24.165 Removing: /var/run/dpdk/spdk0 00:35:24.165 Removing: /var/run/dpdk/spdk1 00:35:24.165 Removing: /var/run/dpdk/spdk2 00:35:24.165 Removing: /var/run/dpdk/spdk3 00:35:24.165 Removing: /var/run/dpdk/spdk4 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1231381 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1232519 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1233652 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1234281 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1235217 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1235238 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1236235 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1236420 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1236689 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1238273 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1239957 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1240374 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1240620 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1240929 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1241217 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1241466 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1241709 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1241994 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1242729 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1245698 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1245947 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1246198 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1246214 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1246697 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1246704 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1247188 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1247195 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1247477 00:35:24.165 Removing: /var/run/dpdk/spdk_pid1247673 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1247825 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1247934 00:35:24.424 Removing: 
/var/run/dpdk/spdk_pid1248425 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1248600 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1248941 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1252713 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1257129 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1267082 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1267757 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1272000 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1272250 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1276479 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1282298 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1285604 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1295712 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1304657 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1306359 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1307272 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1324176 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1328084 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1373450 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1378700 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1384627 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1391370 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1391433 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1392175 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1393027 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1393929 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1394405 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1394599 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1394833 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1394851 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1394853 00:35:24.424 Removing: /var/run/dpdk/spdk_pid1395759 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1396659 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1397570 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1398030 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1398062 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1398418 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1399490 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1400473 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1408626 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1437475 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1441962 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1443731 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1445373 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1445562 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1445795 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1445815 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1446313 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1448124 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1448954 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1449371 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1451647 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1452036 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1452640 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1457006 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1462944 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1462945 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1462946 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1466692 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1475250 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1479395 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1485317 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1486401 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1487817 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1489151 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1493659 00:35:24.425 Removing: /var/run/dpdk/spdk_pid1497961 00:35:24.684 Removing: 
/var/run/dpdk/spdk_pid1501941 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1509706 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1509860 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1514428 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1514655 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1514881 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1515306 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1515340 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1519800 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1520360 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1524664 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1527375 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1532711 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1537995 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1546552 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1553632 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1553638 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1572862 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1573343 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1574004 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1574478 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1575210 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1575679 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1576352 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1576817 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1580985 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1581264 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1587210 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1587322 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1592727 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1596897 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1607073 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1607681 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1611742 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1612115 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1616189 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1621984 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1624542 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1634390 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1643184 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1644764 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1645673 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1662003 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1665841 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1668567 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1676228 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1676236 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1681328 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1683196 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1685135 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1686307 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1688328 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1689383 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1698816 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1699270 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1699723 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1702210 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1702672 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1703130 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1706972 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1706982 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1708531 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1709023 00:35:24.684 Removing: /var/run/dpdk/spdk_pid1709169 00:35:24.684 Clean 00:35:24.943 15:28:26 -- common/autotest_common.sh@1453 -- # return 0 00:35:24.943 15:28:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:24.943 
00:35:24.943 15:28:26 -- common/autotest_common.sh@1453 -- # return 0
00:35:24.943 15:28:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:24.943 15:28:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:24.943 15:28:26 -- common/autotest_common.sh@10 -- # set +x
00:35:24.943 15:28:26 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:24.943 15:28:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:24.943 15:28:26 -- common/autotest_common.sh@10 -- # set +x
00:35:24.943 15:28:26 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:24.943 15:28:26 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:24.943 15:28:26 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:24.943 15:28:26 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:24.943 15:28:26 -- spdk/autotest.sh@398 -- # hostname
00:35:24.943 15:28:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:25.203 geninfo: WARNING: invalid characters removed from testname!
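The lcov invocation above captures the counters generated during the test run, and the invocations that follow merge them with what appears to be a pre-test baseline (cov_base.info) and then strip paths that should not count toward SPDK coverage (DPDK, system headers, example and helper apps). A condensed, hypothetical sketch of that capture/merge/filter flow; the tracefile names and filter patterns come from the surrounding commands, while the directory variables are assumptions and the branch/function --rc options are omitted for brevity:

  #!/usr/bin/env bash
  # Illustrative only; the authoritative flags and paths are in the log itself.
  set -e
  out=output          # assumed: the spdk/../output directory
  src=spdk            # assumed: the SPDK source tree built with coverage enabled

  # Capture counters produced by the test run.
  lcov -q -c --no-external -d "$src" -t "$(hostname)" -o "$out/cov_test.info"

  # Merge with the baseline captured before the tests ran.
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Remove third-party and helper paths from the combined report, in place.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done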
00:35:47.138 15:28:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:48.516 15:28:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:50.422 15:28:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:52.327 15:28:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:54.232 15:28:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:56.137 15:28:57 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.515 15:28:59 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:57.515 15:28:59 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:57.515 15:28:59 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:35:57.515 15:28:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:57.515 15:28:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:57.515 15:28:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:57.774 + [[ -n 1154513 ]]
00:35:57.774 + sudo kill 1154513
00:35:57.784 [Pipeline] }
00:35:57.799 [Pipeline] // stage
00:35:57.804 [Pipeline] }
00:35:57.818 [Pipeline] // timeout
00:35:57.823 [Pipeline] }
00:35:57.837 [Pipeline] // catchError
00:35:57.842 [Pipeline] }
00:35:57.857 [Pipeline] // wrap
00:35:57.863 [Pipeline] }
00:35:57.877 [Pipeline] // catchError
00:35:57.886 [Pipeline] stage
00:35:57.888 [Pipeline] { (Epilogue)
00:35:57.901 [Pipeline] catchError
00:35:57.902 [Pipeline] {
00:35:57.914 [Pipeline] echo
00:35:57.916 Cleanup processes
00:35:57.922 [Pipeline] sh
00:35:58.208 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:58.208 1720140 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:58.221 [Pipeline] sh
00:35:58.507 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:35:58.507 ++ grep -v 'sudo pgrep'
00:35:58.507 ++ awk '{print $1}'
00:35:58.507 + sudo kill -9
00:35:58.518 + true
00:35:58.518 [Pipeline] sh
00:35:58.803 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:11.025 [Pipeline] sh
00:36:11.312 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:11.312 Artifacts sizes are good
00:36:11.326 [Pipeline] archiveArtifacts
00:36:11.333 Archiving artifacts
00:36:11.451 [Pipeline] sh
00:36:11.736 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:11.749 [Pipeline] cleanWs
00:36:11.758 [WS-CLEANUP] Deleting project workspace...
00:36:11.758 [WS-CLEANUP] Deferred wipeout is used...
00:36:11.764 [WS-CLEANUP] done
00:36:11.766 [Pipeline] }
00:36:11.781 [Pipeline] // catchError
00:36:11.791 [Pipeline] sh
00:36:12.157 + logger -p user.info -t JENKINS-CI
00:36:12.165 [Pipeline] }
00:36:12.178 [Pipeline] // stage
00:36:12.183 [Pipeline] }
00:36:12.196 [Pipeline] // node
00:36:12.200 [Pipeline] End of Pipeline
00:36:12.239 Finished: SUCCESS
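For reference, the Epilogue's "Cleanup processes" step above looks for anything still running out of the workspace (pgrep -af, drop the pgrep line itself, keep the PID column) and force-kills it; in this run the list came back empty, so the bare sudo kill -9 fails and the script falls through to + true. A hypothetical stand-alone version of the same step that simply skips the kill when nothing matched; WORKSPACE is an assumed stand-in for the Jenkins workspace path seen in the log:

  #!/usr/bin/env bash
  # Sketch of the workspace process cleanup, not the pipeline's actual script.
  WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}

  # pgrep -a prints "PID full-command-line"; -f matches the full command line.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  if [ -n "$pids" ]; then
      # Word-splitting of $pids is intentional: one argument per PID.
      sudo kill -9 $pids
  fi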